00:00:00.002 Started by upstream project "autotest-per-patch" build number 126237 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.113 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.114 The recommended git tool is: git 00:00:00.114 using credential 00000000-0000-0000-0000-000000000002 00:00:00.116 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.154 Fetching changes from the remote Git repository 00:00:00.156 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.203 Using shallow fetch with depth 1 00:00:00.203 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.203 > git --version # timeout=10 00:00:00.231 > git --version # 'git version 2.39.2' 00:00:00.231 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.245 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.245 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.428 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.441 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.452 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:06.452 > git config core.sparsecheckout # timeout=10 00:00:06.462 > git read-tree -mu HEAD # timeout=10 00:00:06.477 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:06.496 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:06.496 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:06.573 [Pipeline] Start of Pipeline 00:00:06.586 [Pipeline] library 00:00:06.587 Loading library shm_lib@master 00:00:06.588 Library shm_lib@master is cached. Copying from home. 00:00:06.602 [Pipeline] node 00:00:06.609 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.610 [Pipeline] { 00:00:06.620 [Pipeline] catchError 00:00:06.621 [Pipeline] { 00:00:06.632 [Pipeline] wrap 00:00:06.639 [Pipeline] { 00:00:06.645 [Pipeline] stage 00:00:06.646 [Pipeline] { (Prologue) 00:00:06.812 [Pipeline] sh 00:00:07.095 + logger -p user.info -t JENKINS-CI 00:00:07.137 [Pipeline] echo 00:00:07.138 Node: CYP9 00:00:07.144 [Pipeline] sh 00:00:07.444 [Pipeline] setCustomBuildProperty 00:00:07.452 [Pipeline] echo 00:00:07.453 Cleanup processes 00:00:07.457 [Pipeline] sh 00:00:07.742 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.742 1835404 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.755 [Pipeline] sh 00:00:08.038 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.039 ++ grep -v 'sudo pgrep' 00:00:08.039 ++ awk '{print $1}' 00:00:08.039 + sudo kill -9 00:00:08.039 + true 00:00:08.052 [Pipeline] cleanWs 00:00:08.060 [WS-CLEANUP] Deleting project workspace... 00:00:08.060 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.066 [WS-CLEANUP] done 00:00:08.069 [Pipeline] setCustomBuildProperty 00:00:08.079 [Pipeline] sh 00:00:08.360 + sudo git config --global --replace-all safe.directory '*' 00:00:08.469 [Pipeline] httpRequest 00:00:08.500 [Pipeline] echo 00:00:08.502 Sorcerer 10.211.164.101 is alive 00:00:08.510 [Pipeline] httpRequest 00:00:08.515 HttpMethod: GET 00:00:08.516 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.516 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.519 Response Code: HTTP/1.1 200 OK 00:00:08.519 Success: Status code 200 is in the accepted range: 200,404 00:00:08.520 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:09.638 [Pipeline] sh 00:00:09.930 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:09.949 [Pipeline] httpRequest 00:00:09.973 [Pipeline] echo 00:00:09.974 Sorcerer 10.211.164.101 is alive 00:00:09.983 [Pipeline] httpRequest 00:00:09.988 HttpMethod: GET 00:00:09.988 URL: http://10.211.164.101/packages/spdk_b26ca8289b58648c0816f83720e3b904274a249c.tar.gz 00:00:09.989 Sending request to url: http://10.211.164.101/packages/spdk_b26ca8289b58648c0816f83720e3b904274a249c.tar.gz 00:00:10.011 Response Code: HTTP/1.1 200 OK 00:00:10.011 Success: Status code 200 is in the accepted range: 200,404 00:00:10.012 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b26ca8289b58648c0816f83720e3b904274a249c.tar.gz 00:01:03.023 [Pipeline] sh 00:01:03.312 + tar --no-same-owner -xf spdk_b26ca8289b58648c0816f83720e3b904274a249c.tar.gz 00:01:06.622 [Pipeline] sh 00:01:06.905 + git -C spdk log --oneline -n5 00:01:06.906 b26ca8289 event: add enforce_numa app option 00:01:06.906 83c8cffdc env: add enforce_numa environment option 00:01:06.906 804b11b4b env_dpdk: assert that SOCKET_ID_ANY == SPDK_ENV_SOCKET_ID_ANY 00:01:06.906 cdc37ee83 env_dpdk: deprecate spdk_env_opts_init and spdk_env_init 00:01:06.906 24018edd4 all: replace spdk_env_opts_init/spdk_env_init with _ext variant 00:01:06.918 [Pipeline] } 00:01:06.935 [Pipeline] // stage 00:01:06.945 [Pipeline] stage 00:01:06.947 [Pipeline] { (Prepare) 00:01:06.964 [Pipeline] writeFile 00:01:06.980 [Pipeline] sh 00:01:07.259 + logger -p user.info -t JENKINS-CI 00:01:07.273 [Pipeline] sh 00:01:07.557 + logger -p user.info -t JENKINS-CI 00:01:07.570 [Pipeline] sh 00:01:07.855 + cat autorun-spdk.conf 00:01:07.855 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:07.855 SPDK_TEST_NVMF=1 00:01:07.855 SPDK_TEST_NVME_CLI=1 00:01:07.855 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:07.855 SPDK_TEST_NVMF_NICS=e810 00:01:07.855 SPDK_TEST_VFIOUSER=1 00:01:07.855 SPDK_RUN_UBSAN=1 00:01:07.855 NET_TYPE=phy 00:01:07.863 RUN_NIGHTLY=0 00:01:07.868 [Pipeline] readFile 00:01:07.896 [Pipeline] withEnv 00:01:07.898 [Pipeline] { 00:01:07.913 [Pipeline] sh 00:01:08.236 + set -ex 00:01:08.236 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:08.236 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:08.236 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.236 ++ SPDK_TEST_NVMF=1 00:01:08.236 ++ SPDK_TEST_NVME_CLI=1 00:01:08.236 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:08.236 ++ SPDK_TEST_NVMF_NICS=e810 00:01:08.236 ++ SPDK_TEST_VFIOUSER=1 00:01:08.236 ++ SPDK_RUN_UBSAN=1 00:01:08.236 ++ NET_TYPE=phy 00:01:08.236 ++ RUN_NIGHTLY=0 00:01:08.236 + case $SPDK_TEST_NVMF_NICS in 00:01:08.236 + DRIVERS=ice 
00:01:08.236 + [[ tcp == \r\d\m\a ]] 00:01:08.236 + [[ -n ice ]] 00:01:08.236 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:08.236 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:08.236 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:08.236 rmmod: ERROR: Module irdma is not currently loaded 00:01:08.236 rmmod: ERROR: Module i40iw is not currently loaded 00:01:08.236 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:08.236 + true 00:01:08.236 + for D in $DRIVERS 00:01:08.236 + sudo modprobe ice 00:01:08.236 + exit 0 00:01:08.246 [Pipeline] } 00:01:08.264 [Pipeline] // withEnv 00:01:08.267 [Pipeline] } 00:01:08.283 [Pipeline] // stage 00:01:08.290 [Pipeline] catchError 00:01:08.291 [Pipeline] { 00:01:08.304 [Pipeline] timeout 00:01:08.304 Timeout set to expire in 50 min 00:01:08.306 [Pipeline] { 00:01:08.321 [Pipeline] stage 00:01:08.322 [Pipeline] { (Tests) 00:01:08.334 [Pipeline] sh 00:01:08.618 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:08.618 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:08.618 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:08.618 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:08.618 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:08.618 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:08.618 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:08.618 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:08.618 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:08.618 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:08.618 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:08.618 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:08.618 + source /etc/os-release 00:01:08.618 ++ NAME='Fedora Linux' 00:01:08.618 ++ VERSION='38 (Cloud Edition)' 00:01:08.618 ++ ID=fedora 00:01:08.618 ++ VERSION_ID=38 00:01:08.618 ++ VERSION_CODENAME= 00:01:08.618 ++ PLATFORM_ID=platform:f38 00:01:08.618 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:08.618 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:08.618 ++ LOGO=fedora-logo-icon 00:01:08.618 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:08.618 ++ HOME_URL=https://fedoraproject.org/ 00:01:08.618 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:08.618 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:08.618 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:08.618 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:08.618 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:08.618 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:08.618 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:08.618 ++ SUPPORT_END=2024-05-14 00:01:08.618 ++ VARIANT='Cloud Edition' 00:01:08.618 ++ VARIANT_ID=cloud 00:01:08.618 + uname -a 00:01:08.618 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:08.618 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:11.916 Hugepages 00:01:11.916 node hugesize free / total 00:01:11.916 node0 1048576kB 0 / 0 00:01:11.916 node0 2048kB 0 / 0 00:01:11.916 node1 1048576kB 0 / 0 00:01:11.916 node1 2048kB 0 / 0 00:01:11.916 00:01:11.916 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:11.916 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:11.916 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:11.916 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma 
- - 00:01:11.916 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:11.916 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:11.916 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:11.916 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:11.916 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:11.916 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:11.916 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:11.916 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:11.916 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:11.916 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:11.916 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:11.916 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:11.916 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:11.916 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:11.916 + rm -f /tmp/spdk-ld-path 00:01:11.916 + source autorun-spdk.conf 00:01:11.916 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.916 ++ SPDK_TEST_NVMF=1 00:01:11.916 ++ SPDK_TEST_NVME_CLI=1 00:01:11.916 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.916 ++ SPDK_TEST_NVMF_NICS=e810 00:01:11.916 ++ SPDK_TEST_VFIOUSER=1 00:01:11.916 ++ SPDK_RUN_UBSAN=1 00:01:11.916 ++ NET_TYPE=phy 00:01:11.916 ++ RUN_NIGHTLY=0 00:01:11.916 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:11.916 + [[ -n '' ]] 00:01:11.916 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:11.916 + for M in /var/spdk/build-*-manifest.txt 00:01:11.916 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:11.916 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:11.916 + for M in /var/spdk/build-*-manifest.txt 00:01:11.916 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:11.916 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:11.916 ++ uname 00:01:11.916 + [[ Linux == \L\i\n\u\x ]] 00:01:11.916 + sudo dmesg -T 00:01:11.916 + sudo dmesg --clear 00:01:11.916 + dmesg_pid=1836380 00:01:11.916 + [[ Fedora Linux == FreeBSD ]] 00:01:11.916 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:11.916 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:11.916 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:11.916 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:11.917 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:11.917 + [[ -x /usr/src/fio-static/fio ]] 00:01:11.917 + export FIO_BIN=/usr/src/fio-static/fio 00:01:11.917 + FIO_BIN=/usr/src/fio-static/fio 00:01:11.917 + sudo dmesg -Tw 00:01:11.917 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:11.917 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:11.917 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:11.917 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:11.917 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:11.917 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:11.917 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:11.917 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:11.917 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:11.917 Test configuration: 00:01:11.917 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.917 SPDK_TEST_NVMF=1 00:01:11.917 SPDK_TEST_NVME_CLI=1 00:01:11.917 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.917 SPDK_TEST_NVMF_NICS=e810 00:01:11.917 SPDK_TEST_VFIOUSER=1 00:01:11.917 SPDK_RUN_UBSAN=1 00:01:11.917 NET_TYPE=phy 00:01:11.917 RUN_NIGHTLY=0 21:17:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:11.917 21:17:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:11.917 21:17:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:11.917 21:17:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:11.917 21:17:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.917 21:17:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.917 21:17:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.917 21:17:01 -- paths/export.sh@5 -- $ export PATH 00:01:11.917 21:17:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.917 21:17:01 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:11.917 21:17:01 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:11.917 21:17:01 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721071021.XXXXXX 00:01:11.917 21:17:01 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721071021.IKIr5X 00:01:11.917 21:17:01 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:11.917 21:17:01 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:11.917 21:17:01 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:11.917 21:17:01 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:11.917 21:17:01 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:11.917 21:17:01 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:11.917 21:17:01 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:11.917 21:17:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.917 21:17:01 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:11.917 21:17:01 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:11.917 21:17:01 -- pm/common@17 -- $ local monitor 00:01:11.917 21:17:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.917 21:17:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.917 21:17:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.917 21:17:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.917 21:17:01 -- pm/common@21 -- $ date +%s 00:01:11.917 21:17:01 -- pm/common@21 -- $ date +%s 00:01:11.917 21:17:01 -- pm/common@25 -- $ sleep 1 00:01:11.917 21:17:01 -- pm/common@21 -- $ date +%s 00:01:11.917 21:17:01 -- pm/common@21 -- $ date +%s 00:01:11.917 21:17:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721071021 00:01:11.917 21:17:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721071021 00:01:11.917 21:17:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721071021 00:01:11.917 21:17:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721071021 00:01:11.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721071021_collect-cpu-load.pm.log 00:01:11.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721071021_collect-vmstat.pm.log 00:01:11.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721071021_collect-cpu-temp.pm.log 00:01:11.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721071021_collect-bmc-pm.bmc.pm.log 00:01:12.858 21:17:02 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:12.858 21:17:02 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:12.858 21:17:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:12.858 21:17:02 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:12.858 21:17:02 -- spdk/autobuild.sh@16 -- $ date -u 00:01:12.858 Mon Jul 15 07:17:02 PM UTC 2024 00:01:12.858 21:17:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:12.858 v24.09-pre-229-gb26ca8289 00:01:12.858 21:17:02 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:12.858 21:17:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:12.858 21:17:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:12.858 21:17:02 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:12.858 21:17:02 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:12.858 21:17:02 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.858 ************************************ 00:01:12.858 START TEST ubsan 00:01:12.858 ************************************ 00:01:12.858 21:17:02 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:12.858 using ubsan 00:01:12.858 00:01:12.858 real 0m0.000s 00:01:12.858 user 0m0.000s 00:01:12.858 sys 0m0.000s 00:01:12.858 21:17:02 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:12.858 21:17:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:12.858 ************************************ 00:01:12.858 END TEST ubsan 00:01:12.858 ************************************ 00:01:13.118 21:17:02 -- common/autotest_common.sh@1142 -- $ return 0 00:01:13.118 21:17:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:13.118 21:17:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:13.118 21:17:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:13.118 21:17:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:13.118 21:17:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:13.118 21:17:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:13.118 21:17:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:13.118 21:17:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:13.118 21:17:02 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:13.118 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:13.118 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:13.378 Using 'verbs' RDMA provider 00:01:29.249 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:41.485 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:41.485 Creating mk/config.mk...done. 00:01:41.485 Creating mk/cc.flags.mk...done. 00:01:41.485 Type 'make' to build. 
00:01:41.485 21:17:30 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:41.485 21:17:30 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:41.485 21:17:30 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:41.485 21:17:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.485 ************************************ 00:01:41.485 START TEST make 00:01:41.485 ************************************ 00:01:41.485 21:17:30 make -- common/autotest_common.sh@1123 -- $ make -j144 00:01:41.485 make[1]: Nothing to be done for 'all'. 00:01:42.871 The Meson build system 00:01:42.871 Version: 1.3.1 00:01:42.871 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:42.871 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:42.871 Build type: native build 00:01:42.871 Project name: libvfio-user 00:01:42.871 Project version: 0.0.1 00:01:42.871 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:42.871 C linker for the host machine: cc ld.bfd 2.39-16 00:01:42.871 Host machine cpu family: x86_64 00:01:42.871 Host machine cpu: x86_64 00:01:42.871 Run-time dependency threads found: YES 00:01:42.871 Library dl found: YES 00:01:42.871 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:42.871 Run-time dependency json-c found: YES 0.17 00:01:42.871 Run-time dependency cmocka found: YES 1.1.7 00:01:42.871 Program pytest-3 found: NO 00:01:42.871 Program flake8 found: NO 00:01:42.871 Program misspell-fixer found: NO 00:01:42.871 Program restructuredtext-lint found: NO 00:01:42.871 Program valgrind found: YES (/usr/bin/valgrind) 00:01:42.871 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:42.871 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:42.871 Compiler for C supports arguments -Wwrite-strings: YES 00:01:42.871 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:42.871 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:42.871 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:42.871 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:42.871 Build targets in project: 8 00:01:42.871 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:42.871 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:42.871 00:01:42.871 libvfio-user 0.0.1 00:01:42.871 00:01:42.871 User defined options 00:01:42.871 buildtype : debug 00:01:42.871 default_library: shared 00:01:42.871 libdir : /usr/local/lib 00:01:42.871 00:01:42.871 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:43.132 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:43.132 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:43.132 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:43.132 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:43.132 [4/37] Compiling C object samples/null.p/null.c.o 00:01:43.132 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:43.132 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:43.132 [7/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:43.132 [8/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:43.132 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:43.132 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:43.132 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:43.132 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:43.132 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:43.132 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:43.132 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:43.132 [16/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:43.132 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:43.132 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:43.132 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:43.132 [20/37] Compiling C object samples/server.p/server.c.o 00:01:43.132 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:43.132 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:43.132 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:43.132 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:43.132 [25/37] Compiling C object samples/client.p/client.c.o 00:01:43.132 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:43.132 [27/37] Linking target samples/client 00:01:43.132 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:43.445 [29/37] Linking target test/unit_tests 00:01:43.445 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:43.445 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:43.445 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:43.445 [33/37] Linking target samples/null 00:01:43.445 [34/37] Linking target samples/server 00:01:43.445 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:43.445 [36/37] Linking target samples/lspci 00:01:43.445 [37/37] Linking target samples/gpio-pci-idio-16 00:01:43.445 INFO: autodetecting backend as ninja 00:01:43.446 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:43.707 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:43.968 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:43.968 ninja: no work to do. 00:01:50.568 The Meson build system 00:01:50.568 Version: 1.3.1 00:01:50.568 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:50.568 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:50.568 Build type: native build 00:01:50.568 Program cat found: YES (/usr/bin/cat) 00:01:50.568 Project name: DPDK 00:01:50.568 Project version: 24.03.0 00:01:50.568 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:50.568 C linker for the host machine: cc ld.bfd 2.39-16 00:01:50.568 Host machine cpu family: x86_64 00:01:50.568 Host machine cpu: x86_64 00:01:50.568 Message: ## Building in Developer Mode ## 00:01:50.568 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:50.568 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:50.568 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:50.568 Program python3 found: YES (/usr/bin/python3) 00:01:50.568 Program cat found: YES (/usr/bin/cat) 00:01:50.568 Compiler for C supports arguments -march=native: YES 00:01:50.568 Checking for size of "void *" : 8 00:01:50.568 Checking for size of "void *" : 8 (cached) 00:01:50.568 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:50.568 Library m found: YES 00:01:50.568 Library numa found: YES 00:01:50.568 Has header "numaif.h" : YES 00:01:50.568 Library fdt found: NO 00:01:50.568 Library execinfo found: NO 00:01:50.568 Has header "execinfo.h" : YES 00:01:50.568 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:50.568 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:50.568 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:50.568 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:50.568 Run-time dependency openssl found: YES 3.0.9 00:01:50.568 Run-time dependency libpcap found: YES 1.10.4 00:01:50.568 Has header "pcap.h" with dependency libpcap: YES 00:01:50.568 Compiler for C supports arguments -Wcast-qual: YES 00:01:50.568 Compiler for C supports arguments -Wdeprecated: YES 00:01:50.568 Compiler for C supports arguments -Wformat: YES 00:01:50.568 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:50.568 Compiler for C supports arguments -Wformat-security: NO 00:01:50.568 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:50.568 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:50.568 Compiler for C supports arguments -Wnested-externs: YES 00:01:50.568 Compiler for C supports arguments -Wold-style-definition: YES 00:01:50.568 Compiler for C supports arguments -Wpointer-arith: YES 00:01:50.568 Compiler for C supports arguments -Wsign-compare: YES 00:01:50.568 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:50.568 Compiler for C supports arguments -Wundef: YES 00:01:50.568 Compiler for C supports arguments -Wwrite-strings: YES 00:01:50.568 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:50.568 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:50.568 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:50.568 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:50.568 Program objdump found: YES (/usr/bin/objdump) 00:01:50.568 Compiler for C supports arguments -mavx512f: YES 00:01:50.568 Checking if "AVX512 checking" compiles: YES 00:01:50.568 Fetching value of define "__SSE4_2__" : 1 00:01:50.568 Fetching value of define "__AES__" : 1 00:01:50.568 Fetching value of define "__AVX__" : 1 00:01:50.568 Fetching value of define "__AVX2__" : 1 00:01:50.568 Fetching value of define "__AVX512BW__" : 1 00:01:50.568 Fetching value of define "__AVX512CD__" : 1 00:01:50.568 Fetching value of define "__AVX512DQ__" : 1 00:01:50.568 Fetching value of define "__AVX512F__" : 1 00:01:50.568 Fetching value of define "__AVX512VL__" : 1 00:01:50.568 Fetching value of define "__PCLMUL__" : 1 00:01:50.568 Fetching value of define "__RDRND__" : 1 00:01:50.568 Fetching value of define "__RDSEED__" : 1 00:01:50.568 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:50.568 Fetching value of define "__znver1__" : (undefined) 00:01:50.568 Fetching value of define "__znver2__" : (undefined) 00:01:50.568 Fetching value of define "__znver3__" : (undefined) 00:01:50.568 Fetching value of define "__znver4__" : (undefined) 00:01:50.568 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:50.568 Message: lib/log: Defining dependency "log" 00:01:50.568 Message: lib/kvargs: Defining dependency "kvargs" 00:01:50.568 Message: lib/telemetry: Defining dependency "telemetry" 00:01:50.568 Checking for function "getentropy" : NO 00:01:50.568 Message: lib/eal: Defining dependency "eal" 00:01:50.568 Message: lib/ring: Defining dependency "ring" 00:01:50.568 Message: lib/rcu: Defining dependency "rcu" 00:01:50.568 Message: lib/mempool: Defining dependency "mempool" 00:01:50.568 Message: lib/mbuf: Defining dependency "mbuf" 00:01:50.568 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:50.568 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:50.568 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:50.568 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:50.568 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:50.568 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:50.568 Compiler for C supports arguments -mpclmul: YES 00:01:50.568 Compiler for C supports arguments -maes: YES 00:01:50.569 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:50.569 Compiler for C supports arguments -mavx512bw: YES 00:01:50.569 Compiler for C supports arguments -mavx512dq: YES 00:01:50.569 Compiler for C supports arguments -mavx512vl: YES 00:01:50.569 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:50.569 Compiler for C supports arguments -mavx2: YES 00:01:50.569 Compiler for C supports arguments -mavx: YES 00:01:50.569 Message: lib/net: Defining dependency "net" 00:01:50.569 Message: lib/meter: Defining dependency "meter" 00:01:50.569 Message: lib/ethdev: Defining dependency "ethdev" 00:01:50.569 Message: lib/pci: Defining dependency "pci" 00:01:50.569 Message: lib/cmdline: Defining dependency "cmdline" 00:01:50.569 Message: lib/hash: Defining dependency "hash" 00:01:50.569 Message: lib/timer: Defining dependency "timer" 00:01:50.569 Message: lib/compressdev: Defining dependency "compressdev" 00:01:50.569 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:50.569 Message: lib/dmadev: Defining dependency "dmadev" 00:01:50.569 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:01:50.569 Message: lib/power: Defining dependency "power" 00:01:50.569 Message: lib/reorder: Defining dependency "reorder" 00:01:50.569 Message: lib/security: Defining dependency "security" 00:01:50.569 Has header "linux/userfaultfd.h" : YES 00:01:50.569 Has header "linux/vduse.h" : YES 00:01:50.569 Message: lib/vhost: Defining dependency "vhost" 00:01:50.569 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:50.569 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:50.569 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:50.569 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:50.569 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:50.569 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:50.569 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:50.569 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:50.569 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:50.569 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:50.569 Program doxygen found: YES (/usr/bin/doxygen) 00:01:50.569 Configuring doxy-api-html.conf using configuration 00:01:50.569 Configuring doxy-api-man.conf using configuration 00:01:50.569 Program mandb found: YES (/usr/bin/mandb) 00:01:50.569 Program sphinx-build found: NO 00:01:50.569 Configuring rte_build_config.h using configuration 00:01:50.569 Message: 00:01:50.569 ================= 00:01:50.569 Applications Enabled 00:01:50.569 ================= 00:01:50.569 00:01:50.569 apps: 00:01:50.569 00:01:50.569 00:01:50.569 Message: 00:01:50.569 ================= 00:01:50.569 Libraries Enabled 00:01:50.569 ================= 00:01:50.569 00:01:50.569 libs: 00:01:50.569 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:50.569 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:50.569 cryptodev, dmadev, power, reorder, security, vhost, 00:01:50.569 00:01:50.569 Message: 00:01:50.569 =============== 00:01:50.569 Drivers Enabled 00:01:50.569 =============== 00:01:50.569 00:01:50.569 common: 00:01:50.569 00:01:50.569 bus: 00:01:50.569 pci, vdev, 00:01:50.569 mempool: 00:01:50.569 ring, 00:01:50.569 dma: 00:01:50.569 00:01:50.569 net: 00:01:50.569 00:01:50.569 crypto: 00:01:50.569 00:01:50.569 compress: 00:01:50.569 00:01:50.569 vdpa: 00:01:50.569 00:01:50.569 00:01:50.569 Message: 00:01:50.569 ================= 00:01:50.569 Content Skipped 00:01:50.569 ================= 00:01:50.569 00:01:50.569 apps: 00:01:50.569 dumpcap: explicitly disabled via build config 00:01:50.569 graph: explicitly disabled via build config 00:01:50.569 pdump: explicitly disabled via build config 00:01:50.569 proc-info: explicitly disabled via build config 00:01:50.569 test-acl: explicitly disabled via build config 00:01:50.569 test-bbdev: explicitly disabled via build config 00:01:50.569 test-cmdline: explicitly disabled via build config 00:01:50.569 test-compress-perf: explicitly disabled via build config 00:01:50.569 test-crypto-perf: explicitly disabled via build config 00:01:50.569 test-dma-perf: explicitly disabled via build config 00:01:50.569 test-eventdev: explicitly disabled via build config 00:01:50.569 test-fib: explicitly disabled via build config 00:01:50.569 test-flow-perf: explicitly disabled via build config 00:01:50.569 test-gpudev: explicitly disabled via build config 00:01:50.569 
test-mldev: explicitly disabled via build config 00:01:50.569 test-pipeline: explicitly disabled via build config 00:01:50.569 test-pmd: explicitly disabled via build config 00:01:50.569 test-regex: explicitly disabled via build config 00:01:50.569 test-sad: explicitly disabled via build config 00:01:50.569 test-security-perf: explicitly disabled via build config 00:01:50.569 00:01:50.569 libs: 00:01:50.569 argparse: explicitly disabled via build config 00:01:50.569 metrics: explicitly disabled via build config 00:01:50.569 acl: explicitly disabled via build config 00:01:50.569 bbdev: explicitly disabled via build config 00:01:50.569 bitratestats: explicitly disabled via build config 00:01:50.569 bpf: explicitly disabled via build config 00:01:50.569 cfgfile: explicitly disabled via build config 00:01:50.569 distributor: explicitly disabled via build config 00:01:50.569 efd: explicitly disabled via build config 00:01:50.569 eventdev: explicitly disabled via build config 00:01:50.569 dispatcher: explicitly disabled via build config 00:01:50.569 gpudev: explicitly disabled via build config 00:01:50.569 gro: explicitly disabled via build config 00:01:50.569 gso: explicitly disabled via build config 00:01:50.569 ip_frag: explicitly disabled via build config 00:01:50.569 jobstats: explicitly disabled via build config 00:01:50.569 latencystats: explicitly disabled via build config 00:01:50.569 lpm: explicitly disabled via build config 00:01:50.569 member: explicitly disabled via build config 00:01:50.569 pcapng: explicitly disabled via build config 00:01:50.569 rawdev: explicitly disabled via build config 00:01:50.569 regexdev: explicitly disabled via build config 00:01:50.569 mldev: explicitly disabled via build config 00:01:50.569 rib: explicitly disabled via build config 00:01:50.569 sched: explicitly disabled via build config 00:01:50.569 stack: explicitly disabled via build config 00:01:50.569 ipsec: explicitly disabled via build config 00:01:50.569 pdcp: explicitly disabled via build config 00:01:50.569 fib: explicitly disabled via build config 00:01:50.569 port: explicitly disabled via build config 00:01:50.569 pdump: explicitly disabled via build config 00:01:50.569 table: explicitly disabled via build config 00:01:50.569 pipeline: explicitly disabled via build config 00:01:50.569 graph: explicitly disabled via build config 00:01:50.569 node: explicitly disabled via build config 00:01:50.569 00:01:50.569 drivers: 00:01:50.569 common/cpt: not in enabled drivers build config 00:01:50.569 common/dpaax: not in enabled drivers build config 00:01:50.569 common/iavf: not in enabled drivers build config 00:01:50.569 common/idpf: not in enabled drivers build config 00:01:50.569 common/ionic: not in enabled drivers build config 00:01:50.569 common/mvep: not in enabled drivers build config 00:01:50.569 common/octeontx: not in enabled drivers build config 00:01:50.569 bus/auxiliary: not in enabled drivers build config 00:01:50.569 bus/cdx: not in enabled drivers build config 00:01:50.569 bus/dpaa: not in enabled drivers build config 00:01:50.569 bus/fslmc: not in enabled drivers build config 00:01:50.569 bus/ifpga: not in enabled drivers build config 00:01:50.569 bus/platform: not in enabled drivers build config 00:01:50.569 bus/uacce: not in enabled drivers build config 00:01:50.569 bus/vmbus: not in enabled drivers build config 00:01:50.569 common/cnxk: not in enabled drivers build config 00:01:50.569 common/mlx5: not in enabled drivers build config 00:01:50.569 common/nfp: not in enabled drivers 
build config 00:01:50.569 common/nitrox: not in enabled drivers build config 00:01:50.569 common/qat: not in enabled drivers build config 00:01:50.569 common/sfc_efx: not in enabled drivers build config 00:01:50.569 mempool/bucket: not in enabled drivers build config 00:01:50.569 mempool/cnxk: not in enabled drivers build config 00:01:50.569 mempool/dpaa: not in enabled drivers build config 00:01:50.569 mempool/dpaa2: not in enabled drivers build config 00:01:50.569 mempool/octeontx: not in enabled drivers build config 00:01:50.569 mempool/stack: not in enabled drivers build config 00:01:50.569 dma/cnxk: not in enabled drivers build config 00:01:50.569 dma/dpaa: not in enabled drivers build config 00:01:50.569 dma/dpaa2: not in enabled drivers build config 00:01:50.569 dma/hisilicon: not in enabled drivers build config 00:01:50.569 dma/idxd: not in enabled drivers build config 00:01:50.569 dma/ioat: not in enabled drivers build config 00:01:50.569 dma/skeleton: not in enabled drivers build config 00:01:50.569 net/af_packet: not in enabled drivers build config 00:01:50.569 net/af_xdp: not in enabled drivers build config 00:01:50.569 net/ark: not in enabled drivers build config 00:01:50.569 net/atlantic: not in enabled drivers build config 00:01:50.569 net/avp: not in enabled drivers build config 00:01:50.569 net/axgbe: not in enabled drivers build config 00:01:50.569 net/bnx2x: not in enabled drivers build config 00:01:50.569 net/bnxt: not in enabled drivers build config 00:01:50.569 net/bonding: not in enabled drivers build config 00:01:50.569 net/cnxk: not in enabled drivers build config 00:01:50.569 net/cpfl: not in enabled drivers build config 00:01:50.569 net/cxgbe: not in enabled drivers build config 00:01:50.569 net/dpaa: not in enabled drivers build config 00:01:50.569 net/dpaa2: not in enabled drivers build config 00:01:50.569 net/e1000: not in enabled drivers build config 00:01:50.569 net/ena: not in enabled drivers build config 00:01:50.569 net/enetc: not in enabled drivers build config 00:01:50.569 net/enetfec: not in enabled drivers build config 00:01:50.569 net/enic: not in enabled drivers build config 00:01:50.569 net/failsafe: not in enabled drivers build config 00:01:50.569 net/fm10k: not in enabled drivers build config 00:01:50.569 net/gve: not in enabled drivers build config 00:01:50.569 net/hinic: not in enabled drivers build config 00:01:50.569 net/hns3: not in enabled drivers build config 00:01:50.569 net/i40e: not in enabled drivers build config 00:01:50.569 net/iavf: not in enabled drivers build config 00:01:50.569 net/ice: not in enabled drivers build config 00:01:50.569 net/idpf: not in enabled drivers build config 00:01:50.569 net/igc: not in enabled drivers build config 00:01:50.569 net/ionic: not in enabled drivers build config 00:01:50.569 net/ipn3ke: not in enabled drivers build config 00:01:50.569 net/ixgbe: not in enabled drivers build config 00:01:50.569 net/mana: not in enabled drivers build config 00:01:50.569 net/memif: not in enabled drivers build config 00:01:50.569 net/mlx4: not in enabled drivers build config 00:01:50.569 net/mlx5: not in enabled drivers build config 00:01:50.570 net/mvneta: not in enabled drivers build config 00:01:50.570 net/mvpp2: not in enabled drivers build config 00:01:50.570 net/netvsc: not in enabled drivers build config 00:01:50.570 net/nfb: not in enabled drivers build config 00:01:50.570 net/nfp: not in enabled drivers build config 00:01:50.570 net/ngbe: not in enabled drivers build config 00:01:50.570 net/null: not in 
enabled drivers build config 00:01:50.570 net/octeontx: not in enabled drivers build config 00:01:50.570 net/octeon_ep: not in enabled drivers build config 00:01:50.570 net/pcap: not in enabled drivers build config 00:01:50.570 net/pfe: not in enabled drivers build config 00:01:50.570 net/qede: not in enabled drivers build config 00:01:50.570 net/ring: not in enabled drivers build config 00:01:50.570 net/sfc: not in enabled drivers build config 00:01:50.570 net/softnic: not in enabled drivers build config 00:01:50.570 net/tap: not in enabled drivers build config 00:01:50.570 net/thunderx: not in enabled drivers build config 00:01:50.570 net/txgbe: not in enabled drivers build config 00:01:50.570 net/vdev_netvsc: not in enabled drivers build config 00:01:50.570 net/vhost: not in enabled drivers build config 00:01:50.570 net/virtio: not in enabled drivers build config 00:01:50.570 net/vmxnet3: not in enabled drivers build config 00:01:50.570 raw/*: missing internal dependency, "rawdev" 00:01:50.570 crypto/armv8: not in enabled drivers build config 00:01:50.570 crypto/bcmfs: not in enabled drivers build config 00:01:50.570 crypto/caam_jr: not in enabled drivers build config 00:01:50.570 crypto/ccp: not in enabled drivers build config 00:01:50.570 crypto/cnxk: not in enabled drivers build config 00:01:50.570 crypto/dpaa_sec: not in enabled drivers build config 00:01:50.570 crypto/dpaa2_sec: not in enabled drivers build config 00:01:50.570 crypto/ipsec_mb: not in enabled drivers build config 00:01:50.570 crypto/mlx5: not in enabled drivers build config 00:01:50.570 crypto/mvsam: not in enabled drivers build config 00:01:50.570 crypto/nitrox: not in enabled drivers build config 00:01:50.570 crypto/null: not in enabled drivers build config 00:01:50.570 crypto/octeontx: not in enabled drivers build config 00:01:50.570 crypto/openssl: not in enabled drivers build config 00:01:50.570 crypto/scheduler: not in enabled drivers build config 00:01:50.570 crypto/uadk: not in enabled drivers build config 00:01:50.570 crypto/virtio: not in enabled drivers build config 00:01:50.570 compress/isal: not in enabled drivers build config 00:01:50.570 compress/mlx5: not in enabled drivers build config 00:01:50.570 compress/nitrox: not in enabled drivers build config 00:01:50.570 compress/octeontx: not in enabled drivers build config 00:01:50.570 compress/zlib: not in enabled drivers build config 00:01:50.570 regex/*: missing internal dependency, "regexdev" 00:01:50.570 ml/*: missing internal dependency, "mldev" 00:01:50.570 vdpa/ifc: not in enabled drivers build config 00:01:50.570 vdpa/mlx5: not in enabled drivers build config 00:01:50.570 vdpa/nfp: not in enabled drivers build config 00:01:50.570 vdpa/sfc: not in enabled drivers build config 00:01:50.570 event/*: missing internal dependency, "eventdev" 00:01:50.570 baseband/*: missing internal dependency, "bbdev" 00:01:50.570 gpu/*: missing internal dependency, "gpudev" 00:01:50.570 00:01:50.570 00:01:50.570 Build targets in project: 84 00:01:50.570 00:01:50.570 DPDK 24.03.0 00:01:50.570 00:01:50.570 User defined options 00:01:50.570 buildtype : debug 00:01:50.570 default_library : shared 00:01:50.570 libdir : lib 00:01:50.570 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:50.570 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:50.570 c_link_args : 00:01:50.570 cpu_instruction_set: native 00:01:50.570 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:50.570 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:50.570 enable_docs : false 00:01:50.570 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:50.570 enable_kmods : false 00:01:50.570 max_lcores : 128 00:01:50.570 tests : false 00:01:50.570 00:01:50.570 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:50.570 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:50.570 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:50.570 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:50.570 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:50.570 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:50.570 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:50.570 [6/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:50.570 [7/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:50.570 [8/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:50.570 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:50.570 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:50.570 [11/267] Linking static target lib/librte_kvargs.a 00:01:50.570 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:50.570 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:50.570 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:50.570 [15/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:50.570 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:50.570 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:50.570 [18/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:50.570 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:50.570 [20/267] Linking static target lib/librte_log.a 00:01:50.570 [21/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:50.570 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:50.570 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:50.570 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:50.570 [25/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:50.570 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:50.570 [27/267] Linking static target lib/librte_pci.a 00:01:50.570 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:50.570 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:50.570 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:50.570 [31/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:50.570 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:50.570 [33/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:50.570 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:50.830 [35/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:50.830 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:50.830 [37/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:50.830 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:50.830 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:50.830 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:50.830 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:50.830 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:50.830 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:50.830 [44/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:50.830 [45/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.830 [46/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:50.830 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:50.830 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:50.830 [49/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.830 [50/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:50.830 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:50.830 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:50.830 [53/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:50.830 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:50.830 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:50.830 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:50.830 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:51.091 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:51.091 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:51.091 [60/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:51.091 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:51.091 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:51.091 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.091 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:51.091 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:51.091 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:51.091 [67/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:51.091 [68/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:51.091 [69/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:51.091 [70/267] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:51.091 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:51.091 [72/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:51.091 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:51.091 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:51.091 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:51.091 [76/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:51.091 [77/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:51.091 [78/267] Linking static target lib/librte_telemetry.a 00:01:51.091 [79/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:51.091 [80/267] Linking static target lib/librte_meter.a 00:01:51.091 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:51.091 [82/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:51.091 [83/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:51.091 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:51.091 [85/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:51.091 [86/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:51.092 [87/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:51.092 [88/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:51.092 [89/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:51.092 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:51.092 [91/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:51.092 [92/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:51.092 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:51.092 [94/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:51.092 [95/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:51.092 [96/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.092 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:51.092 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:51.092 [99/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:51.092 [100/267] Linking static target lib/librte_ring.a 00:01:51.092 [101/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:51.092 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:51.092 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:51.092 [104/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:51.092 [105/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:51.092 [106/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:51.092 [107/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:51.092 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:51.092 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:51.092 [110/267] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:51.092 [111/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:51.092 [112/267] Linking static target lib/librte_dmadev.a 00:01:51.092 [113/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:51.092 [114/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:51.092 [115/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:51.092 [116/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:51.092 [117/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:51.092 [118/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:51.092 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:51.092 [120/267] Linking static target lib/librte_timer.a 00:01:51.092 [121/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:51.092 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:51.092 [123/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:51.092 [124/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:51.092 [125/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:51.092 [126/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:51.092 [127/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:51.092 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:51.092 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:51.092 [130/267] Linking static target lib/librte_cmdline.a 00:01:51.092 [131/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.092 [132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:51.092 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:51.092 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:51.092 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:51.092 [136/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:51.092 [137/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:51.092 [138/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:51.092 [139/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:51.092 [140/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:51.092 [141/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:51.092 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:51.092 [143/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:51.092 [144/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:51.092 [145/267] Linking static target lib/librte_rcu.a 00:01:51.092 [146/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:51.092 [147/267] Linking target lib/librte_log.so.24.1 00:01:51.092 [148/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:51.092 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:51.092 [150/267] Linking static target lib/librte_net.a 00:01:51.092 [151/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:51.092 [152/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:51.092 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:51.092 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:51.092 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:51.092 [156/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:51.092 [157/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:51.092 [158/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:51.354 [159/267] Linking static target lib/librte_mempool.a 00:01:51.354 [160/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:51.354 [161/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:51.354 [162/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:51.354 [163/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:51.354 [164/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:51.354 [165/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:51.354 [166/267] Linking static target lib/librte_reorder.a 00:01:51.354 [167/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:51.354 [168/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:51.354 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:51.354 [170/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:51.354 [171/267] Linking static target lib/librte_power.a 00:01:51.354 [172/267] Linking static target lib/librte_security.a 00:01:51.354 [173/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:51.354 [174/267] Linking static target lib/librte_eal.a 00:01:51.354 [175/267] Linking static target lib/librte_compressdev.a 00:01:51.354 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:51.354 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:51.354 [178/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:51.354 [179/267] Linking static target lib/librte_mbuf.a 00:01:51.354 [180/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.354 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:51.354 [182/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:51.354 [183/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:51.354 [184/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:51.354 [185/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:51.354 [186/267] Linking target lib/librte_kvargs.so.24.1 00:01:51.354 [187/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.354 [188/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:51.354 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:51.354 [190/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:51.354 [191/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:51.354 [192/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:51.354 [193/267] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:51.354 [194/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:51.354 [195/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:51.354 [196/267] Linking static target drivers/librte_bus_pci.a 00:01:51.354 [197/267] Linking static target drivers/librte_bus_vdev.a 00:01:51.615 [198/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:51.615 [199/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:51.615 [200/267] Linking static target lib/librte_hash.a 00:01:51.615 [201/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:51.615 [202/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.615 [203/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.615 [204/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:51.615 [205/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:51.615 [206/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.615 [207/267] Linking static target drivers/librte_mempool_ring.a 00:01:51.615 [208/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:51.615 [209/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.615 [210/267] Linking static target lib/librte_cryptodev.a 00:01:51.615 [211/267] Linking target lib/librte_telemetry.so.24.1 00:01:51.615 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.615 [213/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.878 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:51.878 [215/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.878 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.878 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:52.139 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.139 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:52.139 [220/267] Linking static target lib/librte_ethdev.a 00:01:52.139 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.139 [222/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.139 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.400 [224/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.400 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.400 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.345 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:53.345 [228/267] Linking static target lib/librte_vhost.a 00:01:53.918 [229/267] 
Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.304 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.901 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.846 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.846 [233/267] Linking target lib/librte_eal.so.24.1 00:02:03.107 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:03.107 [235/267] Linking target lib/librte_timer.so.24.1 00:02:03.107 [236/267] Linking target lib/librte_meter.so.24.1 00:02:03.107 [237/267] Linking target lib/librte_ring.so.24.1 00:02:03.107 [238/267] Linking target lib/librte_pci.so.24.1 00:02:03.107 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:03.107 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:03.368 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:03.368 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:03.368 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:03.368 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:03.368 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:03.368 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:03.368 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:03.368 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:03.645 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:03.645 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:03.645 [251/267] Linking target lib/librte_mbuf.so.24.1 00:02:03.645 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:03.645 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:03.918 [254/267] Linking target lib/librte_net.so.24.1 00:02:03.918 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:03.918 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:03.918 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:03.918 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:03.918 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:03.918 [260/267] Linking target lib/librte_security.so.24.1 00:02:03.918 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:03.918 [262/267] Linking target lib/librte_hash.so.24.1 00:02:03.918 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:04.179 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:04.179 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:04.179 [266/267] Linking target lib/librte_power.so.24.1 00:02:04.179 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:04.179 INFO: autodetecting backend as ninja 00:02:04.179 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:05.564 CC lib/ut/ut.o 00:02:05.564 CC lib/log/log.o 00:02:05.564 CC lib/log/log_flags.o 00:02:05.564 CC lib/log/log_deprecated.o 00:02:05.564 CC lib/ut_mock/mock.o 00:02:05.564 LIB 
libspdk_log.a 00:02:05.564 LIB libspdk_ut.a 00:02:05.564 LIB libspdk_ut_mock.a 00:02:05.564 SO libspdk_ut.so.2.0 00:02:05.564 SO libspdk_log.so.7.0 00:02:05.564 SO libspdk_ut_mock.so.6.0 00:02:05.564 SYMLINK libspdk_ut.so 00:02:05.564 SYMLINK libspdk_log.so 00:02:05.564 SYMLINK libspdk_ut_mock.so 00:02:05.826 CC lib/ioat/ioat.o 00:02:05.826 CXX lib/trace_parser/trace.o 00:02:05.826 CC lib/dma/dma.o 00:02:05.826 CC lib/util/base64.o 00:02:05.826 CC lib/util/bit_array.o 00:02:05.826 CC lib/util/cpuset.o 00:02:05.826 CC lib/util/crc16.o 00:02:05.826 CC lib/util/crc32.o 00:02:05.826 CC lib/util/crc32c.o 00:02:05.826 CC lib/util/crc32_ieee.o 00:02:05.826 CC lib/util/crc64.o 00:02:05.826 CC lib/util/dif.o 00:02:06.087 CC lib/util/fd.o 00:02:06.087 CC lib/util/fd_group.o 00:02:06.087 CC lib/util/file.o 00:02:06.087 CC lib/util/hexlify.o 00:02:06.087 CC lib/util/iov.o 00:02:06.087 CC lib/util/math.o 00:02:06.087 CC lib/util/net.o 00:02:06.087 CC lib/util/pipe.o 00:02:06.087 CC lib/util/strerror_tls.o 00:02:06.087 CC lib/util/string.o 00:02:06.087 CC lib/util/uuid.o 00:02:06.087 CC lib/util/xor.o 00:02:06.087 CC lib/util/zipf.o 00:02:06.087 CC lib/vfio_user/host/vfio_user.o 00:02:06.087 CC lib/vfio_user/host/vfio_user_pci.o 00:02:06.087 LIB libspdk_dma.a 00:02:06.087 SO libspdk_dma.so.4.0 00:02:06.385 LIB libspdk_ioat.a 00:02:06.385 SYMLINK libspdk_dma.so 00:02:06.385 SO libspdk_ioat.so.7.0 00:02:06.385 SYMLINK libspdk_ioat.so 00:02:06.385 LIB libspdk_vfio_user.a 00:02:06.385 SO libspdk_vfio_user.so.5.0 00:02:06.385 LIB libspdk_util.a 00:02:06.656 SYMLINK libspdk_vfio_user.so 00:02:06.656 SO libspdk_util.so.9.1 00:02:06.656 SYMLINK libspdk_util.so 00:02:06.656 LIB libspdk_trace_parser.a 00:02:06.656 SO libspdk_trace_parser.so.5.0 00:02:06.918 SYMLINK libspdk_trace_parser.so 00:02:06.918 CC lib/vmd/vmd.o 00:02:06.918 CC lib/vmd/led.o 00:02:06.918 CC lib/rdma_utils/rdma_utils.o 00:02:06.918 CC lib/rdma_provider/common.o 00:02:06.918 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:06.918 CC lib/json/json_parse.o 00:02:06.918 CC lib/json/json_util.o 00:02:06.918 CC lib/json/json_write.o 00:02:06.918 CC lib/conf/conf.o 00:02:06.918 CC lib/idxd/idxd.o 00:02:06.918 CC lib/idxd/idxd_user.o 00:02:06.918 CC lib/idxd/idxd_kernel.o 00:02:06.918 CC lib/env_dpdk/env.o 00:02:06.918 CC lib/env_dpdk/memory.o 00:02:06.918 CC lib/env_dpdk/pci.o 00:02:06.918 CC lib/env_dpdk/init.o 00:02:06.918 CC lib/env_dpdk/threads.o 00:02:06.918 CC lib/env_dpdk/pci_ioat.o 00:02:06.918 CC lib/env_dpdk/pci_virtio.o 00:02:06.918 CC lib/env_dpdk/pci_vmd.o 00:02:07.180 CC lib/env_dpdk/pci_idxd.o 00:02:07.180 CC lib/env_dpdk/pci_event.o 00:02:07.180 CC lib/env_dpdk/sigbus_handler.o 00:02:07.180 CC lib/env_dpdk/pci_dpdk.o 00:02:07.180 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:07.180 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:07.180 LIB libspdk_rdma_provider.a 00:02:07.180 LIB libspdk_conf.a 00:02:07.441 SO libspdk_rdma_provider.so.6.0 00:02:07.441 LIB libspdk_rdma_utils.a 00:02:07.441 SO libspdk_conf.so.6.0 00:02:07.441 LIB libspdk_json.a 00:02:07.441 SO libspdk_rdma_utils.so.1.0 00:02:07.441 SYMLINK libspdk_rdma_provider.so 00:02:07.441 SO libspdk_json.so.6.0 00:02:07.441 SYMLINK libspdk_rdma_utils.so 00:02:07.441 SYMLINK libspdk_conf.so 00:02:07.441 SYMLINK libspdk_json.so 00:02:07.441 LIB libspdk_idxd.a 00:02:07.702 SO libspdk_idxd.so.12.0 00:02:07.702 LIB libspdk_vmd.a 00:02:07.702 SO libspdk_vmd.so.6.0 00:02:07.702 SYMLINK libspdk_idxd.so 00:02:07.702 SYMLINK libspdk_vmd.so 00:02:07.963 CC lib/jsonrpc/jsonrpc_server.o 00:02:07.963 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:02:07.963 CC lib/jsonrpc/jsonrpc_client.o 00:02:07.963 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:08.224 LIB libspdk_jsonrpc.a 00:02:08.224 SO libspdk_jsonrpc.so.6.0 00:02:08.224 SYMLINK libspdk_jsonrpc.so 00:02:08.224 LIB libspdk_env_dpdk.a 00:02:08.485 SO libspdk_env_dpdk.so.15.0 00:02:08.485 SYMLINK libspdk_env_dpdk.so 00:02:08.485 CC lib/rpc/rpc.o 00:02:08.744 LIB libspdk_rpc.a 00:02:08.744 SO libspdk_rpc.so.6.0 00:02:08.744 SYMLINK libspdk_rpc.so 00:02:09.316 CC lib/trace/trace.o 00:02:09.316 CC lib/trace/trace_flags.o 00:02:09.316 CC lib/keyring/keyring.o 00:02:09.316 CC lib/trace/trace_rpc.o 00:02:09.316 CC lib/notify/notify.o 00:02:09.316 CC lib/keyring/keyring_rpc.o 00:02:09.316 CC lib/notify/notify_rpc.o 00:02:09.316 LIB libspdk_notify.a 00:02:09.576 SO libspdk_notify.so.6.0 00:02:09.576 LIB libspdk_keyring.a 00:02:09.576 LIB libspdk_trace.a 00:02:09.576 SO libspdk_keyring.so.1.0 00:02:09.576 SYMLINK libspdk_notify.so 00:02:09.576 SO libspdk_trace.so.10.0 00:02:09.576 SYMLINK libspdk_keyring.so 00:02:09.576 SYMLINK libspdk_trace.so 00:02:09.837 CC lib/thread/thread.o 00:02:09.837 CC lib/thread/iobuf.o 00:02:09.837 CC lib/sock/sock_rpc.o 00:02:09.837 CC lib/sock/sock.o 00:02:10.408 LIB libspdk_sock.a 00:02:10.408 SO libspdk_sock.so.10.0 00:02:10.408 SYMLINK libspdk_sock.so 00:02:10.669 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:10.669 CC lib/nvme/nvme_ctrlr.o 00:02:10.669 CC lib/nvme/nvme_fabric.o 00:02:10.669 CC lib/nvme/nvme_ns_cmd.o 00:02:10.669 CC lib/nvme/nvme_ns.o 00:02:10.669 CC lib/nvme/nvme_pcie_common.o 00:02:10.669 CC lib/nvme/nvme_pcie.o 00:02:10.669 CC lib/nvme/nvme_qpair.o 00:02:10.669 CC lib/nvme/nvme.o 00:02:10.669 CC lib/nvme/nvme_quirks.o 00:02:10.669 CC lib/nvme/nvme_transport.o 00:02:10.669 CC lib/nvme/nvme_discovery.o 00:02:10.669 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:10.669 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:10.669 CC lib/nvme/nvme_tcp.o 00:02:10.669 CC lib/nvme/nvme_opal.o 00:02:10.669 CC lib/nvme/nvme_io_msg.o 00:02:10.669 CC lib/nvme/nvme_poll_group.o 00:02:10.669 CC lib/nvme/nvme_zns.o 00:02:10.669 CC lib/nvme/nvme_stubs.o 00:02:10.669 CC lib/nvme/nvme_auth.o 00:02:10.669 CC lib/nvme/nvme_cuse.o 00:02:10.669 CC lib/nvme/nvme_vfio_user.o 00:02:10.669 CC lib/nvme/nvme_rdma.o 00:02:11.241 LIB libspdk_thread.a 00:02:11.241 SO libspdk_thread.so.10.1 00:02:11.241 SYMLINK libspdk_thread.so 00:02:11.501 CC lib/vfu_tgt/tgt_endpoint.o 00:02:11.501 CC lib/vfu_tgt/tgt_rpc.o 00:02:11.501 CC lib/accel/accel.o 00:02:11.501 CC lib/accel/accel_rpc.o 00:02:11.501 CC lib/accel/accel_sw.o 00:02:11.501 CC lib/virtio/virtio.o 00:02:11.501 CC lib/virtio/virtio_pci.o 00:02:11.501 CC lib/virtio/virtio_vhost_user.o 00:02:11.501 CC lib/virtio/virtio_vfio_user.o 00:02:11.761 CC lib/init/json_config.o 00:02:11.761 CC lib/init/subsystem.o 00:02:11.761 CC lib/init/subsystem_rpc.o 00:02:11.761 CC lib/init/rpc.o 00:02:11.761 CC lib/blob/blobstore.o 00:02:11.761 CC lib/blob/request.o 00:02:11.761 CC lib/blob/zeroes.o 00:02:11.761 CC lib/blob/blob_bs_dev.o 00:02:11.761 LIB libspdk_init.a 00:02:12.021 LIB libspdk_vfu_tgt.a 00:02:12.021 LIB libspdk_virtio.a 00:02:12.021 SO libspdk_init.so.5.0 00:02:12.021 SO libspdk_vfu_tgt.so.3.0 00:02:12.021 SO libspdk_virtio.so.7.0 00:02:12.021 SYMLINK libspdk_init.so 00:02:12.021 SYMLINK libspdk_vfu_tgt.so 00:02:12.021 SYMLINK libspdk_virtio.so 00:02:12.281 CC lib/event/app.o 00:02:12.281 CC lib/event/log_rpc.o 00:02:12.281 CC lib/event/reactor.o 00:02:12.281 CC lib/event/app_rpc.o 00:02:12.281 CC 
lib/event/scheduler_static.o 00:02:12.541 LIB libspdk_accel.a 00:02:12.541 SO libspdk_accel.so.15.1 00:02:12.541 LIB libspdk_nvme.a 00:02:12.541 SYMLINK libspdk_accel.so 00:02:12.541 SO libspdk_nvme.so.13.1 00:02:12.802 LIB libspdk_event.a 00:02:12.802 SO libspdk_event.so.14.0 00:02:12.802 SYMLINK libspdk_event.so 00:02:12.802 CC lib/bdev/bdev.o 00:02:12.802 CC lib/bdev/bdev_rpc.o 00:02:12.802 CC lib/bdev/bdev_zone.o 00:02:12.802 CC lib/bdev/part.o 00:02:12.802 CC lib/bdev/scsi_nvme.o 00:02:13.063 SYMLINK libspdk_nvme.so 00:02:14.030 LIB libspdk_blob.a 00:02:14.290 SO libspdk_blob.so.11.0 00:02:14.290 SYMLINK libspdk_blob.so 00:02:14.551 CC lib/blobfs/blobfs.o 00:02:14.551 CC lib/lvol/lvol.o 00:02:14.551 CC lib/blobfs/tree.o 00:02:15.123 LIB libspdk_bdev.a 00:02:15.123 SO libspdk_bdev.so.15.1 00:02:15.383 SYMLINK libspdk_bdev.so 00:02:15.383 LIB libspdk_blobfs.a 00:02:15.383 SO libspdk_blobfs.so.10.0 00:02:15.383 LIB libspdk_lvol.a 00:02:15.383 SYMLINK libspdk_blobfs.so 00:02:15.383 SO libspdk_lvol.so.10.0 00:02:15.643 SYMLINK libspdk_lvol.so 00:02:15.643 CC lib/nbd/nbd.o 00:02:15.643 CC lib/nbd/nbd_rpc.o 00:02:15.643 CC lib/scsi/dev.o 00:02:15.643 CC lib/scsi/lun.o 00:02:15.643 CC lib/scsi/port.o 00:02:15.643 CC lib/scsi/scsi.o 00:02:15.643 CC lib/nvmf/ctrlr.o 00:02:15.643 CC lib/scsi/scsi_bdev.o 00:02:15.643 CC lib/scsi/scsi_pr.o 00:02:15.643 CC lib/ublk/ublk.o 00:02:15.643 CC lib/nvmf/ctrlr_discovery.o 00:02:15.643 CC lib/nvmf/ctrlr_bdev.o 00:02:15.643 CC lib/ublk/ublk_rpc.o 00:02:15.643 CC lib/scsi/scsi_rpc.o 00:02:15.643 CC lib/nvmf/subsystem.o 00:02:15.643 CC lib/scsi/task.o 00:02:15.643 CC lib/nvmf/nvmf.o 00:02:15.643 CC lib/nvmf/nvmf_rpc.o 00:02:15.643 CC lib/nvmf/transport.o 00:02:15.643 CC lib/ftl/ftl_core.o 00:02:15.643 CC lib/nvmf/tcp.o 00:02:15.643 CC lib/ftl/ftl_init.o 00:02:15.643 CC lib/nvmf/stubs.o 00:02:15.643 CC lib/ftl/ftl_layout.o 00:02:15.643 CC lib/nvmf/mdns_server.o 00:02:15.643 CC lib/ftl/ftl_debug.o 00:02:15.643 CC lib/nvmf/vfio_user.o 00:02:15.643 CC lib/ftl/ftl_io.o 00:02:15.643 CC lib/nvmf/rdma.o 00:02:15.643 CC lib/ftl/ftl_sb.o 00:02:15.643 CC lib/nvmf/auth.o 00:02:15.643 CC lib/ftl/ftl_l2p.o 00:02:15.643 CC lib/ftl/ftl_l2p_flat.o 00:02:15.643 CC lib/ftl/ftl_nv_cache.o 00:02:15.643 CC lib/ftl/ftl_band.o 00:02:15.643 CC lib/ftl/ftl_band_ops.o 00:02:15.643 CC lib/ftl/ftl_writer.o 00:02:15.643 CC lib/ftl/ftl_rq.o 00:02:15.643 CC lib/ftl/ftl_reloc.o 00:02:15.643 CC lib/ftl/ftl_l2p_cache.o 00:02:15.643 CC lib/ftl/ftl_p2l.o 00:02:15.643 CC lib/ftl/mngt/ftl_mngt.o 00:02:15.643 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:15.643 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:15.643 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:15.643 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:15.643 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:15.643 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:15.643 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:15.643 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:15.643 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:15.643 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:15.643 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:15.643 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:15.643 CC lib/ftl/utils/ftl_conf.o 00:02:15.643 CC lib/ftl/utils/ftl_mempool.o 00:02:15.643 CC lib/ftl/utils/ftl_md.o 00:02:15.643 CC lib/ftl/utils/ftl_bitmap.o 00:02:15.643 CC lib/ftl/utils/ftl_property.o 00:02:15.643 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:15.643 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:15.643 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:15.643 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:15.643 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:02:15.643 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:15.643 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:15.643 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:15.643 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:15.643 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:15.643 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:15.643 CC lib/ftl/base/ftl_base_bdev.o 00:02:15.643 CC lib/ftl/base/ftl_base_dev.o 00:02:15.643 CC lib/ftl/ftl_trace.o 00:02:16.211 LIB libspdk_nbd.a 00:02:16.211 SO libspdk_nbd.so.7.0 00:02:16.211 SYMLINK libspdk_nbd.so 00:02:16.211 LIB libspdk_scsi.a 00:02:16.211 SO libspdk_scsi.so.9.0 00:02:16.211 LIB libspdk_ublk.a 00:02:16.473 SO libspdk_ublk.so.3.0 00:02:16.473 SYMLINK libspdk_scsi.so 00:02:16.473 SYMLINK libspdk_ublk.so 00:02:16.734 LIB libspdk_ftl.a 00:02:16.734 CC lib/vhost/vhost.o 00:02:16.734 CC lib/iscsi/conn.o 00:02:16.734 CC lib/vhost/vhost_rpc.o 00:02:16.734 CC lib/iscsi/init_grp.o 00:02:16.734 CC lib/vhost/vhost_scsi.o 00:02:16.734 CC lib/iscsi/md5.o 00:02:16.734 CC lib/vhost/vhost_blk.o 00:02:16.734 CC lib/iscsi/iscsi.o 00:02:16.734 CC lib/vhost/rte_vhost_user.o 00:02:16.734 CC lib/iscsi/param.o 00:02:16.734 CC lib/iscsi/portal_grp.o 00:02:16.734 CC lib/iscsi/tgt_node.o 00:02:16.734 CC lib/iscsi/iscsi_subsystem.o 00:02:16.734 CC lib/iscsi/iscsi_rpc.o 00:02:16.734 CC lib/iscsi/task.o 00:02:16.734 SO libspdk_ftl.so.9.0 00:02:17.305 SYMLINK libspdk_ftl.so 00:02:17.565 LIB libspdk_nvmf.a 00:02:17.565 SO libspdk_nvmf.so.19.0 00:02:17.565 SYMLINK libspdk_nvmf.so 00:02:17.565 LIB libspdk_vhost.a 00:02:17.825 SO libspdk_vhost.so.8.0 00:02:17.825 SYMLINK libspdk_vhost.so 00:02:17.825 LIB libspdk_iscsi.a 00:02:17.825 SO libspdk_iscsi.so.8.0 00:02:18.085 SYMLINK libspdk_iscsi.so 00:02:18.657 CC module/vfu_device/vfu_virtio_blk.o 00:02:18.657 CC module/vfu_device/vfu_virtio.o 00:02:18.657 CC module/vfu_device/vfu_virtio_scsi.o 00:02:18.657 CC module/vfu_device/vfu_virtio_rpc.o 00:02:18.657 CC module/env_dpdk/env_dpdk_rpc.o 00:02:18.657 CC module/sock/posix/posix.o 00:02:18.657 CC module/accel/iaa/accel_iaa.o 00:02:18.657 CC module/accel/iaa/accel_iaa_rpc.o 00:02:18.657 CC module/blob/bdev/blob_bdev.o 00:02:18.657 CC module/accel/error/accel_error_rpc.o 00:02:18.657 CC module/accel/error/accel_error.o 00:02:18.657 LIB libspdk_env_dpdk_rpc.a 00:02:18.657 CC module/accel/dsa/accel_dsa.o 00:02:18.657 CC module/accel/dsa/accel_dsa_rpc.o 00:02:18.657 CC module/keyring/file/keyring.o 00:02:18.657 CC module/keyring/file/keyring_rpc.o 00:02:18.657 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:18.657 CC module/accel/ioat/accel_ioat.o 00:02:18.657 CC module/accel/ioat/accel_ioat_rpc.o 00:02:18.657 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:18.657 CC module/scheduler/gscheduler/gscheduler.o 00:02:18.657 CC module/keyring/linux/keyring.o 00:02:18.918 CC module/keyring/linux/keyring_rpc.o 00:02:18.918 SO libspdk_env_dpdk_rpc.so.6.0 00:02:18.918 SYMLINK libspdk_env_dpdk_rpc.so 00:02:18.918 LIB libspdk_scheduler_dpdk_governor.a 00:02:18.918 LIB libspdk_keyring_linux.a 00:02:18.918 LIB libspdk_keyring_file.a 00:02:18.918 LIB libspdk_accel_error.a 00:02:18.918 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:18.918 LIB libspdk_scheduler_gscheduler.a 00:02:18.918 LIB libspdk_accel_ioat.a 00:02:18.918 SO libspdk_keyring_linux.so.1.0 00:02:18.918 SO libspdk_keyring_file.so.1.0 00:02:18.918 LIB libspdk_accel_iaa.a 00:02:18.918 SO libspdk_accel_error.so.2.0 00:02:18.918 LIB libspdk_scheduler_dynamic.a 00:02:18.918 SO libspdk_scheduler_gscheduler.so.4.0 00:02:18.918 SYMLINK 
libspdk_scheduler_dpdk_governor.so 00:02:18.918 SO libspdk_accel_ioat.so.6.0 00:02:18.918 SO libspdk_accel_iaa.so.3.0 00:02:18.918 LIB libspdk_accel_dsa.a 00:02:18.918 LIB libspdk_blob_bdev.a 00:02:18.918 SO libspdk_scheduler_dynamic.so.4.0 00:02:18.918 SYMLINK libspdk_keyring_linux.so 00:02:19.180 SYMLINK libspdk_keyring_file.so 00:02:19.180 SYMLINK libspdk_accel_error.so 00:02:19.180 SYMLINK libspdk_scheduler_gscheduler.so 00:02:19.180 SO libspdk_blob_bdev.so.11.0 00:02:19.180 SO libspdk_accel_dsa.so.5.0 00:02:19.180 SYMLINK libspdk_accel_ioat.so 00:02:19.180 SYMLINK libspdk_accel_iaa.so 00:02:19.180 SYMLINK libspdk_scheduler_dynamic.so 00:02:19.180 LIB libspdk_vfu_device.a 00:02:19.180 SYMLINK libspdk_blob_bdev.so 00:02:19.180 SYMLINK libspdk_accel_dsa.so 00:02:19.180 SO libspdk_vfu_device.so.3.0 00:02:19.180 SYMLINK libspdk_vfu_device.so 00:02:19.442 LIB libspdk_sock_posix.a 00:02:19.442 SO libspdk_sock_posix.so.6.0 00:02:19.442 SYMLINK libspdk_sock_posix.so 00:02:19.703 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:19.703 CC module/bdev/malloc/bdev_malloc.o 00:02:19.703 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:19.703 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:19.703 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:19.703 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:19.703 CC module/bdev/ftl/bdev_ftl.o 00:02:19.703 CC module/bdev/lvol/vbdev_lvol.o 00:02:19.703 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:19.703 CC module/bdev/nvme/bdev_nvme.o 00:02:19.703 CC module/bdev/error/vbdev_error.o 00:02:19.703 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:19.703 CC module/bdev/error/vbdev_error_rpc.o 00:02:19.703 CC module/bdev/nvme/nvme_rpc.o 00:02:19.703 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:19.703 CC module/bdev/gpt/gpt.o 00:02:19.703 CC module/bdev/nvme/bdev_mdns_client.o 00:02:19.703 CC module/bdev/gpt/vbdev_gpt.o 00:02:19.703 CC module/bdev/nvme/vbdev_opal.o 00:02:19.703 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:19.703 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:19.703 CC module/bdev/raid/bdev_raid.o 00:02:19.703 CC module/bdev/aio/bdev_aio.o 00:02:19.703 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:19.703 CC module/bdev/null/bdev_null.o 00:02:19.703 CC module/bdev/raid/bdev_raid_sb.o 00:02:19.703 CC module/bdev/null/bdev_null_rpc.o 00:02:19.703 CC module/bdev/raid/bdev_raid_rpc.o 00:02:19.703 CC module/bdev/iscsi/bdev_iscsi.o 00:02:19.703 CC module/bdev/split/vbdev_split.o 00:02:19.703 CC module/bdev/aio/bdev_aio_rpc.o 00:02:19.703 CC module/bdev/delay/vbdev_delay.o 00:02:19.703 CC module/bdev/passthru/vbdev_passthru.o 00:02:19.703 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:19.703 CC module/bdev/split/vbdev_split_rpc.o 00:02:19.703 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:19.703 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:19.703 CC module/bdev/raid/raid1.o 00:02:19.703 CC module/bdev/raid/raid0.o 00:02:19.703 CC module/bdev/raid/concat.o 00:02:19.703 CC module/blobfs/bdev/blobfs_bdev.o 00:02:19.703 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:19.963 LIB libspdk_blobfs_bdev.a 00:02:19.963 SO libspdk_blobfs_bdev.so.6.0 00:02:19.963 LIB libspdk_bdev_error.a 00:02:19.963 LIB libspdk_bdev_split.a 00:02:19.963 LIB libspdk_bdev_null.a 00:02:19.963 SO libspdk_bdev_split.so.6.0 00:02:19.963 SO libspdk_bdev_error.so.6.0 00:02:19.963 LIB libspdk_bdev_gpt.a 00:02:19.963 LIB libspdk_bdev_passthru.a 00:02:19.963 SYMLINK libspdk_blobfs_bdev.so 00:02:19.963 SO libspdk_bdev_null.so.6.0 00:02:19.963 LIB libspdk_bdev_ftl.a 00:02:19.963 LIB libspdk_bdev_aio.a 00:02:19.963 
SO libspdk_bdev_gpt.so.6.0 00:02:19.963 SO libspdk_bdev_passthru.so.6.0 00:02:19.963 SYMLINK libspdk_bdev_split.so 00:02:19.963 LIB libspdk_bdev_zone_block.a 00:02:19.963 LIB libspdk_bdev_iscsi.a 00:02:19.963 SYMLINK libspdk_bdev_error.so 00:02:20.224 SO libspdk_bdev_ftl.so.6.0 00:02:20.224 SO libspdk_bdev_aio.so.6.0 00:02:20.224 LIB libspdk_bdev_malloc.a 00:02:20.224 SO libspdk_bdev_iscsi.so.6.0 00:02:20.224 SYMLINK libspdk_bdev_null.so 00:02:20.224 SO libspdk_bdev_zone_block.so.6.0 00:02:20.224 LIB libspdk_bdev_delay.a 00:02:20.224 SYMLINK libspdk_bdev_gpt.so 00:02:20.224 SYMLINK libspdk_bdev_passthru.so 00:02:20.224 SO libspdk_bdev_malloc.so.6.0 00:02:20.224 SYMLINK libspdk_bdev_aio.so 00:02:20.224 SYMLINK libspdk_bdev_ftl.so 00:02:20.224 SO libspdk_bdev_delay.so.6.0 00:02:20.224 LIB libspdk_bdev_lvol.a 00:02:20.224 SYMLINK libspdk_bdev_iscsi.so 00:02:20.224 LIB libspdk_bdev_virtio.a 00:02:20.224 SYMLINK libspdk_bdev_zone_block.so 00:02:20.224 SO libspdk_bdev_virtio.so.6.0 00:02:20.224 SO libspdk_bdev_lvol.so.6.0 00:02:20.224 SYMLINK libspdk_bdev_malloc.so 00:02:20.224 SYMLINK libspdk_bdev_delay.so 00:02:20.224 SYMLINK libspdk_bdev_lvol.so 00:02:20.224 SYMLINK libspdk_bdev_virtio.so 00:02:20.485 LIB libspdk_bdev_raid.a 00:02:20.759 SO libspdk_bdev_raid.so.6.0 00:02:20.759 SYMLINK libspdk_bdev_raid.so 00:02:21.756 LIB libspdk_bdev_nvme.a 00:02:21.756 SO libspdk_bdev_nvme.so.7.0 00:02:21.756 SYMLINK libspdk_bdev_nvme.so 00:02:22.700 CC module/event/subsystems/scheduler/scheduler.o 00:02:22.700 CC module/event/subsystems/iobuf/iobuf.o 00:02:22.700 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:22.700 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:22.700 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:22.700 CC module/event/subsystems/vmd/vmd.o 00:02:22.700 CC module/event/subsystems/keyring/keyring.o 00:02:22.700 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:22.700 CC module/event/subsystems/sock/sock.o 00:02:22.700 LIB libspdk_event_iobuf.a 00:02:22.700 LIB libspdk_event_scheduler.a 00:02:22.700 LIB libspdk_event_vfu_tgt.a 00:02:22.700 LIB libspdk_event_keyring.a 00:02:22.700 LIB libspdk_event_vhost_blk.a 00:02:22.700 LIB libspdk_event_vmd.a 00:02:22.700 LIB libspdk_event_sock.a 00:02:22.700 SO libspdk_event_iobuf.so.3.0 00:02:22.700 SO libspdk_event_scheduler.so.4.0 00:02:22.700 SO libspdk_event_vfu_tgt.so.3.0 00:02:22.700 SO libspdk_event_vhost_blk.so.3.0 00:02:22.700 SO libspdk_event_keyring.so.1.0 00:02:22.700 SO libspdk_event_vmd.so.6.0 00:02:22.700 SO libspdk_event_sock.so.5.0 00:02:22.700 SYMLINK libspdk_event_vfu_tgt.so 00:02:22.961 SYMLINK libspdk_event_vhost_blk.so 00:02:22.961 SYMLINK libspdk_event_scheduler.so 00:02:22.961 SYMLINK libspdk_event_iobuf.so 00:02:22.961 SYMLINK libspdk_event_keyring.so 00:02:22.961 SYMLINK libspdk_event_vmd.so 00:02:22.961 SYMLINK libspdk_event_sock.so 00:02:23.222 CC module/event/subsystems/accel/accel.o 00:02:23.222 LIB libspdk_event_accel.a 00:02:23.483 SO libspdk_event_accel.so.6.0 00:02:23.483 SYMLINK libspdk_event_accel.so 00:02:23.743 CC module/event/subsystems/bdev/bdev.o 00:02:24.004 LIB libspdk_event_bdev.a 00:02:24.004 SO libspdk_event_bdev.so.6.0 00:02:24.004 SYMLINK libspdk_event_bdev.so 00:02:24.577 CC module/event/subsystems/nbd/nbd.o 00:02:24.577 CC module/event/subsystems/scsi/scsi.o 00:02:24.577 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:24.577 CC module/event/subsystems/ublk/ublk.o 00:02:24.577 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:24.577 LIB libspdk_event_nbd.a 00:02:24.577 LIB 
libspdk_event_ublk.a 00:02:24.577 SO libspdk_event_ublk.so.3.0 00:02:24.577 LIB libspdk_event_scsi.a 00:02:24.577 SO libspdk_event_nbd.so.6.0 00:02:24.577 SO libspdk_event_scsi.so.6.0 00:02:24.577 SYMLINK libspdk_event_ublk.so 00:02:24.577 LIB libspdk_event_nvmf.a 00:02:24.577 SYMLINK libspdk_event_nbd.so 00:02:24.838 SO libspdk_event_nvmf.so.6.0 00:02:24.838 SYMLINK libspdk_event_scsi.so 00:02:24.838 SYMLINK libspdk_event_nvmf.so 00:02:25.099 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:25.099 CC module/event/subsystems/iscsi/iscsi.o 00:02:25.099 LIB libspdk_event_vhost_scsi.a 00:02:25.359 LIB libspdk_event_iscsi.a 00:02:25.359 SO libspdk_event_vhost_scsi.so.3.0 00:02:25.359 SO libspdk_event_iscsi.so.6.0 00:02:25.359 SYMLINK libspdk_event_vhost_scsi.so 00:02:25.359 SYMLINK libspdk_event_iscsi.so 00:02:25.619 SO libspdk.so.6.0 00:02:25.619 SYMLINK libspdk.so 00:02:25.881 CXX app/trace/trace.o 00:02:25.881 CC app/spdk_nvme_identify/identify.o 00:02:25.881 TEST_HEADER include/spdk/accel.h 00:02:25.881 CC app/trace_record/trace_record.o 00:02:25.881 TEST_HEADER include/spdk/assert.h 00:02:25.881 TEST_HEADER include/spdk/accel_module.h 00:02:25.881 CC app/spdk_lspci/spdk_lspci.o 00:02:25.881 TEST_HEADER include/spdk/barrier.h 00:02:25.881 CC test/rpc_client/rpc_client_test.o 00:02:25.881 TEST_HEADER include/spdk/base64.h 00:02:25.881 TEST_HEADER include/spdk/bdev.h 00:02:25.881 CC app/spdk_top/spdk_top.o 00:02:25.881 TEST_HEADER include/spdk/bdev_module.h 00:02:25.881 TEST_HEADER include/spdk/bdev_zone.h 00:02:25.881 TEST_HEADER include/spdk/bit_array.h 00:02:25.881 TEST_HEADER include/spdk/blob_bdev.h 00:02:25.881 TEST_HEADER include/spdk/bit_pool.h 00:02:25.881 CC app/spdk_nvme_perf/perf.o 00:02:25.881 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:25.881 TEST_HEADER include/spdk/blobfs.h 00:02:25.881 TEST_HEADER include/spdk/blob.h 00:02:25.881 CC app/spdk_nvme_discover/discovery_aer.o 00:02:25.881 TEST_HEADER include/spdk/conf.h 00:02:25.881 TEST_HEADER include/spdk/config.h 00:02:25.881 TEST_HEADER include/spdk/cpuset.h 00:02:25.881 TEST_HEADER include/spdk/crc16.h 00:02:25.881 TEST_HEADER include/spdk/crc64.h 00:02:25.881 TEST_HEADER include/spdk/crc32.h 00:02:25.881 TEST_HEADER include/spdk/dif.h 00:02:25.881 TEST_HEADER include/spdk/dma.h 00:02:25.881 TEST_HEADER include/spdk/endian.h 00:02:25.881 TEST_HEADER include/spdk/env_dpdk.h 00:02:25.881 TEST_HEADER include/spdk/env.h 00:02:25.881 TEST_HEADER include/spdk/fd_group.h 00:02:25.881 TEST_HEADER include/spdk/event.h 00:02:25.881 TEST_HEADER include/spdk/fd.h 00:02:25.881 TEST_HEADER include/spdk/file.h 00:02:25.881 TEST_HEADER include/spdk/ftl.h 00:02:25.881 CC app/spdk_dd/spdk_dd.o 00:02:25.881 TEST_HEADER include/spdk/gpt_spec.h 00:02:25.881 TEST_HEADER include/spdk/hexlify.h 00:02:25.881 TEST_HEADER include/spdk/histogram_data.h 00:02:25.881 TEST_HEADER include/spdk/idxd.h 00:02:25.881 TEST_HEADER include/spdk/idxd_spec.h 00:02:25.881 TEST_HEADER include/spdk/init.h 00:02:25.881 TEST_HEADER include/spdk/ioat.h 00:02:25.881 TEST_HEADER include/spdk/iscsi_spec.h 00:02:25.881 TEST_HEADER include/spdk/ioat_spec.h 00:02:25.881 TEST_HEADER include/spdk/json.h 00:02:25.881 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:25.881 TEST_HEADER include/spdk/keyring.h 00:02:25.881 TEST_HEADER include/spdk/jsonrpc.h 00:02:25.881 TEST_HEADER include/spdk/keyring_module.h 00:02:25.881 TEST_HEADER include/spdk/likely.h 00:02:25.881 TEST_HEADER include/spdk/log.h 00:02:25.881 TEST_HEADER include/spdk/lvol.h 00:02:25.881 TEST_HEADER 
include/spdk/memory.h 00:02:25.881 CC app/iscsi_tgt/iscsi_tgt.o 00:02:25.881 TEST_HEADER include/spdk/mmio.h 00:02:25.881 TEST_HEADER include/spdk/net.h 00:02:25.881 TEST_HEADER include/spdk/nbd.h 00:02:25.881 CC app/nvmf_tgt/nvmf_main.o 00:02:25.881 CC app/spdk_tgt/spdk_tgt.o 00:02:25.881 TEST_HEADER include/spdk/notify.h 00:02:25.881 TEST_HEADER include/spdk/nvme.h 00:02:25.881 TEST_HEADER include/spdk/nvme_intel.h 00:02:26.141 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:26.141 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:26.141 TEST_HEADER include/spdk/nvme_spec.h 00:02:26.141 TEST_HEADER include/spdk/nvme_zns.h 00:02:26.141 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:26.141 TEST_HEADER include/spdk/nvmf.h 00:02:26.141 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:26.141 TEST_HEADER include/spdk/nvmf_transport.h 00:02:26.141 TEST_HEADER include/spdk/nvmf_spec.h 00:02:26.141 TEST_HEADER include/spdk/opal.h 00:02:26.141 TEST_HEADER include/spdk/opal_spec.h 00:02:26.141 TEST_HEADER include/spdk/pci_ids.h 00:02:26.141 TEST_HEADER include/spdk/pipe.h 00:02:26.141 TEST_HEADER include/spdk/reduce.h 00:02:26.141 TEST_HEADER include/spdk/queue.h 00:02:26.141 TEST_HEADER include/spdk/rpc.h 00:02:26.141 TEST_HEADER include/spdk/scsi.h 00:02:26.141 TEST_HEADER include/spdk/scsi_spec.h 00:02:26.141 TEST_HEADER include/spdk/scheduler.h 00:02:26.141 TEST_HEADER include/spdk/sock.h 00:02:26.141 TEST_HEADER include/spdk/stdinc.h 00:02:26.141 TEST_HEADER include/spdk/string.h 00:02:26.141 TEST_HEADER include/spdk/thread.h 00:02:26.141 TEST_HEADER include/spdk/trace.h 00:02:26.141 TEST_HEADER include/spdk/trace_parser.h 00:02:26.141 TEST_HEADER include/spdk/tree.h 00:02:26.141 TEST_HEADER include/spdk/ublk.h 00:02:26.141 TEST_HEADER include/spdk/util.h 00:02:26.141 TEST_HEADER include/spdk/version.h 00:02:26.141 TEST_HEADER include/spdk/uuid.h 00:02:26.141 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:26.141 TEST_HEADER include/spdk/vhost.h 00:02:26.141 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:26.141 TEST_HEADER include/spdk/vmd.h 00:02:26.141 TEST_HEADER include/spdk/xor.h 00:02:26.141 TEST_HEADER include/spdk/zipf.h 00:02:26.141 CXX test/cpp_headers/accel.o 00:02:26.141 CXX test/cpp_headers/accel_module.o 00:02:26.141 CXX test/cpp_headers/assert.o 00:02:26.141 CXX test/cpp_headers/barrier.o 00:02:26.141 CXX test/cpp_headers/base64.o 00:02:26.141 CXX test/cpp_headers/bdev_module.o 00:02:26.141 CXX test/cpp_headers/bdev.o 00:02:26.141 CXX test/cpp_headers/bdev_zone.o 00:02:26.141 CXX test/cpp_headers/bit_array.o 00:02:26.141 CXX test/cpp_headers/bit_pool.o 00:02:26.141 CXX test/cpp_headers/blob_bdev.o 00:02:26.141 CXX test/cpp_headers/blobfs.o 00:02:26.141 CXX test/cpp_headers/config.o 00:02:26.141 CXX test/cpp_headers/blob.o 00:02:26.141 CXX test/cpp_headers/blobfs_bdev.o 00:02:26.141 CXX test/cpp_headers/conf.o 00:02:26.141 CXX test/cpp_headers/cpuset.o 00:02:26.141 CXX test/cpp_headers/crc16.o 00:02:26.141 CXX test/cpp_headers/crc32.o 00:02:26.141 CXX test/cpp_headers/crc64.o 00:02:26.141 CXX test/cpp_headers/dif.o 00:02:26.141 CXX test/cpp_headers/env_dpdk.o 00:02:26.142 CXX test/cpp_headers/endian.o 00:02:26.142 CXX test/cpp_headers/env.o 00:02:26.142 CXX test/cpp_headers/dma.o 00:02:26.142 CXX test/cpp_headers/event.o 00:02:26.142 CXX test/cpp_headers/fd_group.o 00:02:26.142 CXX test/cpp_headers/fd.o 00:02:26.142 CXX test/cpp_headers/file.o 00:02:26.142 CXX test/cpp_headers/ftl.o 00:02:26.142 CXX test/cpp_headers/hexlify.o 00:02:26.142 CXX test/cpp_headers/gpt_spec.o 00:02:26.142 CXX 
test/cpp_headers/histogram_data.o 00:02:26.142 CXX test/cpp_headers/idxd_spec.o 00:02:26.142 CXX test/cpp_headers/idxd.o 00:02:26.142 CXX test/cpp_headers/ioat.o 00:02:26.142 CXX test/cpp_headers/init.o 00:02:26.142 CXX test/cpp_headers/iscsi_spec.o 00:02:26.142 CXX test/cpp_headers/json.o 00:02:26.142 CXX test/cpp_headers/ioat_spec.o 00:02:26.142 CXX test/cpp_headers/jsonrpc.o 00:02:26.142 CXX test/cpp_headers/keyring.o 00:02:26.142 CXX test/cpp_headers/keyring_module.o 00:02:26.142 CXX test/cpp_headers/log.o 00:02:26.142 CXX test/cpp_headers/likely.o 00:02:26.142 CXX test/cpp_headers/memory.o 00:02:26.142 CXX test/cpp_headers/mmio.o 00:02:26.142 CXX test/cpp_headers/lvol.o 00:02:26.142 CXX test/cpp_headers/notify.o 00:02:26.142 CXX test/cpp_headers/nbd.o 00:02:26.142 CXX test/cpp_headers/net.o 00:02:26.142 CXX test/cpp_headers/nvme.o 00:02:26.142 CXX test/cpp_headers/nvme_ocssd.o 00:02:26.142 CXX test/cpp_headers/nvme_intel.o 00:02:26.142 CXX test/cpp_headers/nvme_spec.o 00:02:26.142 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:26.142 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:26.142 CXX test/cpp_headers/nvmf.o 00:02:26.142 CXX test/cpp_headers/nvme_zns.o 00:02:26.142 CXX test/cpp_headers/nvmf_cmd.o 00:02:26.142 CXX test/cpp_headers/nvmf_spec.o 00:02:26.142 CXX test/cpp_headers/opal_spec.o 00:02:26.142 CXX test/cpp_headers/opal.o 00:02:26.142 CXX test/cpp_headers/pci_ids.o 00:02:26.142 CXX test/cpp_headers/nvmf_transport.o 00:02:26.142 CXX test/cpp_headers/pipe.o 00:02:26.142 CXX test/cpp_headers/reduce.o 00:02:26.142 CXX test/cpp_headers/queue.o 00:02:26.142 CXX test/cpp_headers/scsi_spec.o 00:02:26.142 CXX test/cpp_headers/rpc.o 00:02:26.142 CXX test/cpp_headers/scsi.o 00:02:26.142 CXX test/cpp_headers/scheduler.o 00:02:26.142 CC examples/util/zipf/zipf.o 00:02:26.142 CXX test/cpp_headers/sock.o 00:02:26.142 CXX test/cpp_headers/string.o 00:02:26.142 CC test/thread/poller_perf/poller_perf.o 00:02:26.142 CC test/env/vtophys/vtophys.o 00:02:26.142 CXX test/cpp_headers/thread.o 00:02:26.142 CXX test/cpp_headers/stdinc.o 00:02:26.142 CXX test/cpp_headers/trace.o 00:02:26.142 CXX test/cpp_headers/tree.o 00:02:26.142 CXX test/cpp_headers/trace_parser.o 00:02:26.142 CXX test/cpp_headers/ublk.o 00:02:26.142 CC test/app/histogram_perf/histogram_perf.o 00:02:26.142 CXX test/cpp_headers/util.o 00:02:26.142 CXX test/cpp_headers/version.o 00:02:26.142 CXX test/cpp_headers/uuid.o 00:02:26.142 CXX test/cpp_headers/vfio_user_spec.o 00:02:26.142 CXX test/cpp_headers/vfio_user_pci.o 00:02:26.142 CXX test/cpp_headers/vhost.o 00:02:26.142 CXX test/cpp_headers/vmd.o 00:02:26.142 CXX test/cpp_headers/xor.o 00:02:26.142 CXX test/cpp_headers/zipf.o 00:02:26.142 CC test/env/memory/memory_ut.o 00:02:26.142 LINK spdk_lspci 00:02:26.142 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:26.142 CC examples/ioat/perf/perf.o 00:02:26.142 CC test/app/stub/stub.o 00:02:26.142 CC test/app/jsoncat/jsoncat.o 00:02:26.142 CC examples/ioat/verify/verify.o 00:02:26.142 CC test/env/pci/pci_ut.o 00:02:26.142 CC app/fio/nvme/fio_plugin.o 00:02:26.402 CC test/app/bdev_svc/bdev_svc.o 00:02:26.402 CC app/fio/bdev/fio_plugin.o 00:02:26.402 CC test/dma/test_dma/test_dma.o 00:02:26.402 LINK spdk_nvme_discover 00:02:26.402 LINK rpc_client_test 00:02:26.402 LINK interrupt_tgt 00:02:26.402 LINK nvmf_tgt 00:02:26.402 LINK iscsi_tgt 00:02:26.660 LINK spdk_trace_record 00:02:26.660 LINK spdk_tgt 00:02:26.660 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:26.660 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:26.660 CC 
test/env/mem_callbacks/mem_callbacks.o 00:02:26.660 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:26.660 LINK spdk_dd 00:02:26.660 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:26.660 LINK jsoncat 00:02:26.660 LINK spdk_trace 00:02:26.919 LINK stub 00:02:26.919 LINK poller_perf 00:02:26.919 LINK zipf 00:02:26.919 LINK bdev_svc 00:02:26.919 LINK vtophys 00:02:26.919 LINK histogram_perf 00:02:26.919 LINK ioat_perf 00:02:26.919 LINK env_dpdk_post_init 00:02:26.919 LINK verify 00:02:27.178 LINK test_dma 00:02:27.178 CC app/vhost/vhost.o 00:02:27.178 LINK pci_ut 00:02:27.178 LINK spdk_nvme_perf 00:02:27.178 LINK nvme_fuzz 00:02:27.178 LINK spdk_bdev 00:02:27.178 LINK spdk_nvme_identify 00:02:27.178 LINK vhost_fuzz 00:02:27.178 LINK spdk_nvme 00:02:27.436 CC examples/vmd/lsvmd/lsvmd.o 00:02:27.436 CC examples/sock/hello_world/hello_sock.o 00:02:27.436 CC examples/vmd/led/led.o 00:02:27.436 CC test/event/reactor_perf/reactor_perf.o 00:02:27.436 CC examples/idxd/perf/perf.o 00:02:27.436 CC test/event/event_perf/event_perf.o 00:02:27.436 CC test/event/reactor/reactor.o 00:02:27.436 LINK vhost 00:02:27.436 LINK mem_callbacks 00:02:27.436 CC examples/thread/thread/thread_ex.o 00:02:27.436 CC test/event/app_repeat/app_repeat.o 00:02:27.436 CC test/event/scheduler/scheduler.o 00:02:27.436 LINK spdk_top 00:02:27.436 LINK lsvmd 00:02:27.436 LINK led 00:02:27.436 LINK reactor_perf 00:02:27.436 LINK reactor 00:02:27.436 LINK event_perf 00:02:27.694 CC test/nvme/startup/startup.o 00:02:27.694 LINK app_repeat 00:02:27.694 CC test/nvme/reserve/reserve.o 00:02:27.694 CC test/nvme/aer/aer.o 00:02:27.694 CC test/nvme/connect_stress/connect_stress.o 00:02:27.695 CC test/nvme/sgl/sgl.o 00:02:27.695 CC test/nvme/simple_copy/simple_copy.o 00:02:27.695 CC test/nvme/reset/reset.o 00:02:27.695 CC test/nvme/e2edp/nvme_dp.o 00:02:27.695 CC test/nvme/overhead/overhead.o 00:02:27.695 CC test/nvme/fused_ordering/fused_ordering.o 00:02:27.695 LINK hello_sock 00:02:27.695 CC test/nvme/boot_partition/boot_partition.o 00:02:27.695 CC test/nvme/cuse/cuse.o 00:02:27.695 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:27.695 CC test/nvme/fdp/fdp.o 00:02:27.695 CC test/nvme/err_injection/err_injection.o 00:02:27.695 CC test/nvme/compliance/nvme_compliance.o 00:02:27.695 CC test/blobfs/mkfs/mkfs.o 00:02:27.695 CC test/accel/dif/dif.o 00:02:27.695 LINK thread 00:02:27.695 LINK scheduler 00:02:27.695 LINK idxd_perf 00:02:27.695 LINK memory_ut 00:02:27.695 CC test/lvol/esnap/esnap.o 00:02:27.695 LINK startup 00:02:27.695 LINK boot_partition 00:02:27.695 LINK connect_stress 00:02:27.695 LINK reserve 00:02:27.695 LINK err_injection 00:02:27.954 LINK doorbell_aers 00:02:27.954 LINK simple_copy 00:02:27.954 LINK mkfs 00:02:27.954 LINK fused_ordering 00:02:27.954 LINK reset 00:02:27.954 LINK aer 00:02:27.954 LINK overhead 00:02:27.954 LINK nvme_dp 00:02:27.954 LINK sgl 00:02:27.954 LINK nvme_compliance 00:02:27.954 LINK fdp 00:02:28.213 CC examples/nvme/reconnect/reconnect.o 00:02:28.213 LINK dif 00:02:28.213 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:28.213 CC examples/nvme/hotplug/hotplug.o 00:02:28.213 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:28.213 CC examples/nvme/arbitration/arbitration.o 00:02:28.213 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:28.213 CC examples/nvme/hello_world/hello_world.o 00:02:28.213 CC examples/nvme/abort/abort.o 00:02:28.213 LINK iscsi_fuzz 00:02:28.213 CC examples/accel/perf/accel_perf.o 00:02:28.213 CC examples/blob/cli/blobcli.o 00:02:28.213 CC examples/blob/hello_world/hello_blob.o 
00:02:28.213 LINK pmr_persistence 00:02:28.213 LINK cmb_copy 00:02:28.213 LINK hello_world 00:02:28.472 LINK hotplug 00:02:28.472 LINK reconnect 00:02:28.472 LINK arbitration 00:02:28.472 LINK abort 00:02:28.472 LINK nvme_manage 00:02:28.472 LINK hello_blob 00:02:28.731 LINK accel_perf 00:02:28.731 CC test/bdev/bdevio/bdevio.o 00:02:28.731 LINK blobcli 00:02:28.732 LINK cuse 00:02:28.991 LINK bdevio 00:02:29.252 CC examples/bdev/bdevperf/bdevperf.o 00:02:29.252 CC examples/bdev/hello_world/hello_bdev.o 00:02:29.512 LINK hello_bdev 00:02:29.774 LINK bdevperf 00:02:30.718 CC examples/nvmf/nvmf/nvmf.o 00:02:30.718 LINK nvmf 00:02:32.105 LINK esnap 00:02:32.367 00:02:32.367 real 0m51.234s 00:02:32.367 user 6m32.316s 00:02:32.367 sys 4m34.494s 00:02:32.367 21:18:22 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:32.367 21:18:22 make -- common/autotest_common.sh@10 -- $ set +x 00:02:32.367 ************************************ 00:02:32.367 END TEST make 00:02:32.367 ************************************ 00:02:32.367 21:18:22 -- common/autotest_common.sh@1142 -- $ return 0 00:02:32.367 21:18:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:32.367 21:18:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:32.367 21:18:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:32.367 21:18:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.367 21:18:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:32.367 21:18:22 -- pm/common@44 -- $ pid=1836415 00:02:32.367 21:18:22 -- pm/common@50 -- $ kill -TERM 1836415 00:02:32.367 21:18:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.367 21:18:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:32.367 21:18:22 -- pm/common@44 -- $ pid=1836416 00:02:32.367 21:18:22 -- pm/common@50 -- $ kill -TERM 1836416 00:02:32.367 21:18:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.367 21:18:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:32.367 21:18:22 -- pm/common@44 -- $ pid=1836418 00:02:32.367 21:18:22 -- pm/common@50 -- $ kill -TERM 1836418 00:02:32.367 21:18:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.367 21:18:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:32.367 21:18:22 -- pm/common@44 -- $ pid=1836444 00:02:32.367 21:18:22 -- pm/common@50 -- $ sudo -E kill -TERM 1836444 00:02:32.629 21:18:22 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:32.629 21:18:22 -- nvmf/common.sh@7 -- # uname -s 00:02:32.629 21:18:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:32.629 21:18:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:32.629 21:18:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:32.629 21:18:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:32.629 21:18:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:32.629 21:18:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:32.629 21:18:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:32.629 21:18:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:32.629 21:18:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:32.629 21:18:22 -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:02:32.629 21:18:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:32.629 21:18:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:32.629 21:18:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:32.629 21:18:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:32.629 21:18:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:32.629 21:18:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:32.629 21:18:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:32.629 21:18:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:32.629 21:18:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:32.629 21:18:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:32.629 21:18:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.629 21:18:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.629 21:18:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.629 21:18:22 -- paths/export.sh@5 -- # export PATH 00:02:32.629 21:18:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.629 21:18:22 -- nvmf/common.sh@47 -- # : 0 00:02:32.629 21:18:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:32.629 21:18:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:32.629 21:18:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:32.629 21:18:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:32.629 21:18:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:32.629 21:18:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:32.629 21:18:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:32.629 21:18:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:32.629 21:18:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:32.629 21:18:22 -- spdk/autotest.sh@32 -- # uname -s 00:02:32.629 21:18:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:32.629 21:18:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:32.629 21:18:22 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:32.629 21:18:22 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:32.629 21:18:22 -- spdk/autotest.sh@40 
-- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:32.629 21:18:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:32.629 21:18:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:32.629 21:18:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:32.629 21:18:22 -- spdk/autotest.sh@48 -- # udevadm_pid=1900104 00:02:32.629 21:18:22 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:32.629 21:18:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:32.629 21:18:22 -- pm/common@17 -- # local monitor 00:02:32.629 21:18:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.629 21:18:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.629 21:18:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.629 21:18:22 -- pm/common@21 -- # date +%s 00:02:32.629 21:18:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.629 21:18:22 -- pm/common@25 -- # sleep 1 00:02:32.629 21:18:22 -- pm/common@21 -- # date +%s 00:02:32.629 21:18:22 -- pm/common@21 -- # date +%s 00:02:32.629 21:18:22 -- pm/common@21 -- # date +%s 00:02:32.630 21:18:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721071102 00:02:32.630 21:18:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721071102 00:02:32.630 21:18:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721071102 00:02:32.630 21:18:22 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721071102 00:02:32.630 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721071102_collect-vmstat.pm.log 00:02:32.630 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721071102_collect-cpu-load.pm.log 00:02:32.630 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721071102_collect-cpu-temp.pm.log 00:02:32.630 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721071102_collect-bmc-pm.bmc.pm.log 00:02:33.569 21:18:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:33.569 21:18:23 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:33.570 21:18:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:33.570 21:18:23 -- common/autotest_common.sh@10 -- # set +x 00:02:33.570 21:18:23 -- spdk/autotest.sh@59 -- # create_test_list 00:02:33.570 21:18:23 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:33.570 21:18:23 -- common/autotest_common.sh@10 -- # set +x 00:02:33.570 21:18:23 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:33.570 21:18:23 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.570 21:18:23 -- spdk/autotest.sh@61 -- # 
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.570 21:18:23 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:33.570 21:18:23 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.570 21:18:23 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:33.570 21:18:23 -- common/autotest_common.sh@1455 -- # uname 00:02:33.570 21:18:23 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:33.570 21:18:23 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:33.570 21:18:23 -- common/autotest_common.sh@1475 -- # uname 00:02:33.570 21:18:23 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:33.570 21:18:23 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:33.570 21:18:23 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:33.570 21:18:23 -- spdk/autotest.sh@72 -- # hash lcov 00:02:33.570 21:18:23 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:33.570 21:18:23 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:33.570 --rc lcov_branch_coverage=1 00:02:33.570 --rc lcov_function_coverage=1 00:02:33.570 --rc genhtml_branch_coverage=1 00:02:33.570 --rc genhtml_function_coverage=1 00:02:33.570 --rc genhtml_legend=1 00:02:33.570 --rc geninfo_all_blocks=1 00:02:33.570 ' 00:02:33.570 21:18:23 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:33.570 --rc lcov_branch_coverage=1 00:02:33.570 --rc lcov_function_coverage=1 00:02:33.570 --rc genhtml_branch_coverage=1 00:02:33.570 --rc genhtml_function_coverage=1 00:02:33.570 --rc genhtml_legend=1 00:02:33.570 --rc geninfo_all_blocks=1 00:02:33.570 ' 00:02:33.570 21:18:23 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:33.570 --rc lcov_branch_coverage=1 00:02:33.570 --rc lcov_function_coverage=1 00:02:33.570 --rc genhtml_branch_coverage=1 00:02:33.570 --rc genhtml_function_coverage=1 00:02:33.570 --rc genhtml_legend=1 00:02:33.570 --rc geninfo_all_blocks=1 00:02:33.570 --no-external' 00:02:33.570 21:18:23 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:33.570 --rc lcov_branch_coverage=1 00:02:33.570 --rc lcov_function_coverage=1 00:02:33.570 --rc genhtml_branch_coverage=1 00:02:33.570 --rc genhtml_function_coverage=1 00:02:33.570 --rc genhtml_legend=1 00:02:33.570 --rc geninfo_all_blocks=1 00:02:33.570 --no-external' 00:02:33.570 21:18:23 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:33.829 lcov: LCOV version 1.14 00:02:33.829 21:18:23 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:46.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:46.094 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:58.319 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 
00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:58.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:58.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:58.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:58.320 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:58.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:58.320 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:58.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:58.320 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:58.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:58.320 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:58.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:58.320 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:58.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:58.320 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:58.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:58.320 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:58.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:58.320 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:58.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no 
functions found 00:02:58.320 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:58.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:58.320 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:58.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:58.320 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:58.581 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:58.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:58.581 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:58.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:58.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:58.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:58.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:58.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:58.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:58.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:58.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:58.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:58.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:58.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:58.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:58.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:58.582 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:58.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:58.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:58.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:58.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:58.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:58.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:58.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:58.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:58.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:58.843 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:03.051 21:18:52 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:03.051 21:18:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:03.051 21:18:52 -- common/autotest_common.sh@10 -- # set +x 00:03:03.051 21:18:52 -- spdk/autotest.sh@91 -- # rm -f 00:03:03.052 21:18:52 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.355 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:06.355 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:06.355 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:06.355 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:06.355 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:06.355 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:06.355 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:06.355 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:06.355 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:06.355 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:06.355 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:06.355 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:06.355 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:06.355 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:06.355 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:06.355 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:06.355 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:06.615 21:18:56 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:06.615 21:18:56 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:06.615 21:18:56 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:06.615 21:18:56 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:06.615 21:18:56 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:06.615 21:18:56 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:06.615 21:18:56 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:06.615 21:18:56 -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:06.615 21:18:56 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:06.615 21:18:56 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:06.615 21:18:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:06.615 21:18:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:06.615 21:18:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:06.615 21:18:56 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:06.615 21:18:56 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:06.615 No valid GPT data, bailing 00:03:06.615 21:18:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:06.615 21:18:56 -- scripts/common.sh@391 -- # pt= 00:03:06.615 21:18:56 -- scripts/common.sh@392 -- # return 1 00:03:06.616 21:18:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:06.616 1+0 records in 00:03:06.616 1+0 records out 00:03:06.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00147106 s, 713 MB/s 00:03:06.616 21:18:56 -- spdk/autotest.sh@118 -- # sync 00:03:06.616 21:18:56 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:06.616 21:18:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:06.616 21:18:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:14.750 21:19:04 -- spdk/autotest.sh@124 -- # uname -s 00:03:14.750 21:19:04 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:14.750 21:19:04 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:14.750 21:19:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:14.750 21:19:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:14.750 21:19:04 -- common/autotest_common.sh@10 -- # set +x 00:03:14.750 ************************************ 00:03:14.750 START TEST setup.sh 00:03:14.750 ************************************ 00:03:14.750 21:19:04 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:14.750 * Looking for test storage... 00:03:14.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:14.750 21:19:04 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:14.750 21:19:04 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:14.750 21:19:04 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:14.750 21:19:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:14.750 21:19:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:14.750 21:19:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:14.750 ************************************ 00:03:14.750 START TEST acl 00:03:14.750 ************************************ 00:03:14.750 21:19:04 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:14.750 * Looking for test storage... 
00:03:14.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:14.750 21:19:04 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:14.750 21:19:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:14.750 21:19:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:14.750 21:19:04 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:14.750 21:19:04 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:14.750 21:19:04 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:14.750 21:19:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:14.750 21:19:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.750 21:19:04 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:14.750 21:19:04 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:14.750 21:19:04 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:14.750 21:19:04 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:14.750 21:19:04 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:14.750 21:19:04 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:14.750 21:19:04 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:14.750 21:19:04 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:19.020 21:19:08 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:19.020 21:19:08 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:19.020 21:19:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.020 21:19:08 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:19.020 21:19:08 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.020 21:19:08 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:21.566 Hugepages 00:03:21.566 node hugesize free / total 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.566 00:03:21.566 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.566 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:21.828 21:19:11 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:21.828 21:19:11 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.828 21:19:11 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.828 21:19:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:21.828 ************************************ 00:03:21.828 START TEST denied 00:03:21.828 ************************************ 00:03:21.828 21:19:11 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:21.828 21:19:11 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:21.828 21:19:11 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:21.828 21:19:11 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:21.828 21:19:11 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.828 21:19:11 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:26.036 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:26.036 21:19:15 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:26.036 21:19:15 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:26.036 21:19:15 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:26.036 21:19:15 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:26.036 21:19:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:26.036 21:19:15 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:26.036 21:19:15 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:26.036 21:19:15 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:26.036 21:19:15 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.036 21:19:15 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.324 00:03:31.324 real 0m8.606s 00:03:31.324 user 0m2.867s 00:03:31.324 sys 0m5.027s 00:03:31.324 21:19:20 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.324 21:19:20 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:31.324 ************************************ 00:03:31.324 END TEST denied 00:03:31.324 ************************************ 00:03:31.324 21:19:20 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:31.324 21:19:20 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:31.324 21:19:20 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.324 21:19:20 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.324 21:19:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:31.324 ************************************ 00:03:31.324 START TEST allowed 00:03:31.324 ************************************ 00:03:31.324 21:19:20 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:31.324 21:19:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:31.324 21:19:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:31.324 21:19:20 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:31.324 21:19:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.324 21:19:20 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:36.615 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:36.615 21:19:25 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:36.615 21:19:25 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:36.615 21:19:25 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:36.615 21:19:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.615 21:19:25 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.924 00:03:39.924 real 0m9.405s 00:03:39.924 user 0m2.874s 00:03:39.924 sys 0m4.834s 00:03:39.924 21:19:29 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.924 21:19:29 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:39.924 ************************************ 00:03:39.924 END TEST allowed 00:03:39.924 ************************************ 00:03:39.924 21:19:29 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:39.924 00:03:39.924 real 0m25.405s 00:03:39.924 user 0m8.324s 00:03:39.924 sys 0m14.795s 00:03:39.924 21:19:29 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.924 21:19:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:39.924 ************************************ 00:03:39.924 END TEST acl 00:03:39.924 ************************************ 00:03:40.187 21:19:29 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:40.187 21:19:29 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:40.187 21:19:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.187 21:19:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.187 21:19:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:40.187 ************************************ 00:03:40.187 START TEST hugepages 00:03:40.187 ************************************ 00:03:40.187 21:19:29 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:40.187 * Looking for test storage... 00:03:40.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.187 21:19:29 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 102781476 kB' 'MemAvailable: 106270528 kB' 'Buffers: 2704 kB' 'Cached: 14460972 kB' 'SwapCached: 0 kB' 'Active: 11504712 kB' 'Inactive: 3523448 kB' 'Active(anon): 11030528 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567764 kB' 'Mapped: 164720 kB' 'Shmem: 10466044 kB' 'KReclaimable: 532364 kB' 'Slab: 1405532 kB' 'SReclaimable: 532364 kB' 'SUnreclaim: 873168 kB' 'KernelStack: 27392 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460892 kB' 'Committed_AS: 12645436 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB'
00:03:40.187-00:03:40.189 21:19:29 setup.sh.hugepages -- setup/common.sh@31-32 [xtrace condensed: get_meminfo reads every /proc/meminfo field from MemTotal through HugePages_Surp and skips each one with `continue`, since only Hugepagesize is wanted]
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@39-41 [xtrace condensed: for each of the 2 NUMA nodes and each hugepage size under /sys/devices/system/node/node$node/hugepages/hugepages-*, `echo 0` resets the per-node hugepage count]
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:40.189 21:19:29 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:40.189 21:19:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:40.189 21:19:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:40.189 21:19:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:40.189 ************************************
00:03:40.189 START TEST default_setup
00:03:40.189 ************************************
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:40.189 21:19:29 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
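The get_test_nr_hugepages trace above is just arithmetic: the requested 2097152 kB divided by the 2048 kB default hugepage size gives nr_hugepages=1024, all of it assigned to the one requested node (node 0). A stand-alone sketch of that calculation, with illustrative names rather than SPDK's own helpers:

  #!/usr/bin/env bash
  # Sketch only: reproduce the sizing logic seen in the trace for a single requested NUMA node.
  target_kb=2097152                                                # size argument from the trace (2 GiB)
  hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this system
  nr_hugepages=$(( target_kb / hugepage_kb ))                      # 2097152 / 2048 = 1024 pages
  declare -A nodes_test
  nodes_test[0]=$nr_hugepages                                      # node 0 was the only node requested
  echo "node0 gets ${nodes_test[0]} pages of ${hugepage_kb} kB"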
00:03:43.507 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:43.507 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:43.507 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:43.507 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:43.507 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:43.507 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:43.507 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:43.767 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:43.767 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:43.767 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:43.767 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:43.767 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:43.767 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:43.767 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:43.767 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:43.767 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:43.767 0000:65:00.0 (144d a80a): nvme -> vfio-pci
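The rebind messages above come from scripts/setup.sh moving the ioatdma channels and the Samsung NVMe controller onto vfio-pci. For orientation only, this is the generic sysfs mechanism behind such a rebind (setup.sh has its own, more involved logic; the device address is taken from the log):

  #!/usr/bin/env bash
  # Sketch only: rebind one PCI function to vfio-pci through standard sysfs knobs.
  bdf=0000:65:00.0                                                # the NVMe device from the log
  echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"     # prefer vfio-pci on next probe
  echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"         # detach the current driver (nvme)
  echo "$bdf" > /sys/bus/pci/drivers_probe                        # re-probe; driver_override takes effect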
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:44.031 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104967380 kB' 'MemAvailable: 108456368 kB' 'Buffers: 2704 kB' 'Cached: 14461084 kB' 'SwapCached: 0 kB' 'Active: 11523300 kB' 'Inactive: 3523448 kB' 'Active(anon): 11049116 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586336 kB' 'Mapped: 163908 kB' 'Shmem: 10466156 kB' 'KReclaimable: 532300 kB' 'Slab: 1403264 kB' 'SReclaimable: 532300 kB' 'SUnreclaim: 870964 kB' 'KernelStack: 27376 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12632668 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB'
00:03:44.031-00:03:44.032 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 [xtrace condensed: every field before AnonHugePages is read and skipped with `continue`]
00:03:44.032 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.032 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:44.032 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:44.032 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
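As the trace shows, get_meminfo simply walks /proc/meminfo (or a node's meminfo file) line by line until the requested key matches, then prints the value column. A compact equivalent for illustration; this is not the common.sh source and the function name is the editor's:

  #!/usr/bin/env bash
  # Sketch only: print the value of one field from /proc/meminfo or a per-node meminfo file.
  meminfo_value() {
      local get=$1 file=${2:-/proc/meminfo} var val _
      # Per-node files (/sys/devices/system/node/node<N>/meminfo) prefix lines with "Node <N>",
      # so strip that, then split "Key: value unit" the same way the traced read loop does.
      sed -E 's/^Node [0-9]+ +//' "$file" | while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; break; }
      done
  }
  meminfo_value AnonHugePages    # prints 0 on this machine, matching anon=0 above
  meminfo_value Hugepagesize     # prints 2048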
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.032 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.032 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.032 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.032 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.032 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.032 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.032 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.032 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.032 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.032 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104967956 kB' 'MemAvailable: 108456944 kB' 'Buffers: 2704 kB' 'Cached: 14461088 kB' 'SwapCached: 0 kB' 'Active: 11523364 kB' 'Inactive: 3523448 kB' 'Active(anon): 11049180 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586348 kB' 'Mapped: 163856 kB' 'Shmem: 10466160 kB' 'KReclaimable: 532300 kB' 'Slab: 1403264 kB' 'SReclaimable: 532300 kB' 'SUnreclaim: 870964 kB' 'KernelStack: 27360 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12632688 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.033 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue
[setup/common.sh@31-@32 trace condensed: the IFS=': ' / read -r var val _ / continue loop repeats for each remaining /proc/meminfo field (SecPageTables through HugePages_Rsvd); none of them match HugePages_Surp]
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
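For readability: the loop being traced here is the usual mapfile + IFS=': ' read pattern for pulling one field out of /proc/meminfo (or a per-node meminfo file). A minimal stand-alone sketch of that pattern, not the exact setup/common.sh source; the helper name is illustrative:

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup: print the value of one field
# (e.g. HugePages_Surp), either system-wide or for a given NUMA node.
shopt -s extglob

meminfo_field() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local var val _

    # Per-node counters live under /sys/devices/system/node/nodeN/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node N " prefix; strip it so field names match.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    return 1
}

meminfo_field HugePages_Surp      # system-wide surplus huge pages
meminfo_field HugePages_Total 0   # same counter restricted to NUMA node 0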
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:44.305 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104967752 kB' 'MemAvailable: 108456740 kB' 'Buffers: 2704 kB' 'Cached: 14461104 kB' 'SwapCached: 0 kB' 'Active: 11523484 kB' 'Inactive: 3523448 kB' 'Active(anon): 11049300 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586528 kB' 'Mapped: 163856 kB' 'Shmem: 10466176 kB' 'KReclaimable: 532300 kB' 'Slab: 1403336 kB' 'SReclaimable: 532300 kB' 'SUnreclaim: 871036 kB' 'KernelStack: 27376 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12632708 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB'
[setup/common.sh@31-@32 trace condensed: the per-field scan repeats over the snapshot above; nothing matches until the HugePages_Rsvd field itself]
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:44.307 nr_hugepages=1024
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:44.307 resv_hugepages=0
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:44.307 surplus_hugepages=0
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:44.307 anon_hugepages=0
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
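The checks at setup/hugepages.sh@107-@109 above reconcile the kernel's huge page counters with the requested allocation (1024 pages of Hugepagesize 2048 kB on this run), allowing for surplus and reserved pages. A self-contained sanity check along the same lines; the expected count is an assumption for illustration, not a value exported by the test:

#!/usr/bin/env bash
# Sketch: reconcile hugepage counters in /proc/meminfo against an expected
# allocation. EXPECTED_PAGES is an illustrative assumption.
EXPECTED_PAGES=1024

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
free=$(awk  '/^HugePages_Free:/  {print $2}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)

echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp free_hugepages=$free"

# With no hugepage consumers running yet, every configured page should be
# reported by the kernel and still be free, with nothing surplus or reserved.
if (( total == EXPECTED_PAGES && free == EXPECTED_PAGES && surp == 0 && resv == 0 )); then
    echo "hugepage accounting consistent"
else
    echo "unexpected hugepage state" >&2
    exit 1
fi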
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:44.307 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104967936 kB' 'MemAvailable: 108456924 kB' 'Buffers: 2704 kB' 'Cached: 14461144 kB' 'SwapCached: 0 kB' 'Active: 11523172 kB' 'Inactive: 3523448 kB' 'Active(anon): 11048988 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586140 kB' 'Mapped: 163856 kB' 'Shmem: 10466216 kB' 'KReclaimable: 532300 kB' 'Slab: 1403336 kB' 'SReclaimable: 532300 kB' 'SUnreclaim: 871036 kB' 'KernelStack: 27360 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12632732 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235428 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB'
[setup/common.sh@31-@32 trace condensed: the per-field scan repeats over the snapshot above until the HugePages_Total field is reached]
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
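get_nodes above enumerates /sys/devices/system/node/node[0-9]* (two nodes on this machine) and attributes the expected 1024 pages to node 0; the per-node counters are then read back from each node's own meminfo file, whose lines carry a "Node N " prefix, so values sit in the fourth column rather than the second. A short sketch of that per-node readback; the output format is illustrative, not the script's own:

#!/usr/bin/env bash
# Sketch: report huge page counters per NUMA node. Per-node meminfo lines
# look like "Node 0 HugePages_Total:  1024", hence the $4 field below.
shopt -s nullglob

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    total=$(awk '/HugePages_Total:/ {print $4}' "$node_dir/meminfo")
    free=$(awk  '/HugePages_Free:/  {print $4}' "$node_dir/meminfo")
    surp=$(awk  '/HugePages_Surp:/  {print $4}' "$node_dir/meminfo")
    echo "node${node}: HugePages_Total=$total HugePages_Free=$free HugePages_Surp=$surp"
done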
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:44.308 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:44.309 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.309 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.309 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:44.309 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:44.309 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52280408 kB' 'MemUsed: 13378600 kB' 'SwapCached: 0 kB' 'Active: 5134064 kB' 'Inactive: 3300284 kB' 'Active(anon): 4981504 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3300284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8144056 kB' 'Mapped: 82004 kB' 'AnonPages: 293496 kB' 'Shmem: 4691212 kB' 'KernelStack: 14712 kB' 'PageTables: 4812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 407960 kB' 'Slab: 934700 kB' 'SReclaimable: 407960 kB' 'SUnreclaim: 526740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-@32 trace condensed: the same per-field scan now runs over the node0 snapshot above, checking each field against HugePages_Surp]
00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 --
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
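Once the scan below reaches HugePages_Surp it echoes the value (0) and returns; hugepages.sh then folds that surplus into the node's tally (@117) and prints one comparison line per node, "node0=1024 expecting 1024" in the records that follow. A condensed, hypothetical stand-alone form of that comparison (array names follow the @117/@126-@130 records; the real script also fills sorted_t/sorted_s tallies along the way):

  # Sketch only: compare each node's hugepage tally against the count it should have.
  nodes_test=([0]=1024) ; nodes_sys=([0]=1024)   # both hold 1024 for node 0 in this run
  verify_nodes() {
      local node rc=0
      for node in "${!nodes_test[@]}"; do
          echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
          [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]] || rc=1
      done
      return $rc
  }
  verify_nodes   # prints node0=1024 expecting 1024 and succeeds, matching the @128/@130 records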
00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.310 node0=1024 expecting 1024 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.310 00:03:44.310 real 0m3.977s 00:03:44.310 user 0m1.534s 00:03:44.310 sys 0m2.427s 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.310 21:19:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:44.310 ************************************ 00:03:44.310 END TEST default_setup 00:03:44.310 ************************************ 00:03:44.310 21:19:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:44.310 21:19:33 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:44.310 21:19:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.310 21:19:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.310 21:19:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.310 ************************************ 00:03:44.310 START TEST per_node_1G_alloc 00:03:44.310 ************************************ 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.310 21:19:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:47.677 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:47.677 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:47.677 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:47.677 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:47.677 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:47.677 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:47.677 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:47.677 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:47.677 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:47.677 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:47.677 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:47.677 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:47.677 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:47.677 
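The device lines above and below are scripts/setup.sh reporting devices already bound to vfio-pci; it was invoked with NRHUGE=512 HUGENODE=0,1 because get_test_nr_hugepages turned the 1 GiB (1048576 kB) request into 512 pages per node, i.e. 1048576 / 2048 kB Hugepagesize as reported in the meminfo dumps further down. Done by hand against the kernel's generic sysfs interface (not the SPDK script itself), the equivalent allocation would be roughly:

  # Rough manual equivalent of NRHUGE=512 HUGENODE=0,1:
  # 1048576 kB requested / 2048 kB per page = 512 pages, replicated on nodes 0 and 1.
  for node in 0 1; do
      echo 512 | sudo tee \
          /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
  done
  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages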
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:47.677 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:47.677 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:47.677 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105022328 kB' 'MemAvailable: 108511316 kB' 'Buffers: 2704 kB' 'Cached: 14461244 kB' 'SwapCached: 0 kB' 'Active: 11523108 kB' 'Inactive: 3523448 kB' 'Active(anon): 11048924 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585496 kB' 'Mapped: 162960 kB' 'Shmem: 10466316 kB' 'KReclaimable: 532300 kB' 'Slab: 1403708 kB' 'SReclaimable: 532300 kB' 'SUnreclaim: 871408 kB' 'KernelStack: 27392 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12619760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235652 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.942 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.943 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105022868 kB' 'MemAvailable: 108511856 kB' 'Buffers: 2704 kB' 'Cached: 14461248 kB' 'SwapCached: 0 kB' 'Active: 11522600 kB' 'Inactive: 3523448 kB' 'Active(anon): 11048416 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584944 kB' 'Mapped: 162960 kB' 'Shmem: 10466320 kB' 'KReclaimable: 532300 kB' 'Slab: 1403708 kB' 'SReclaimable: 532300 kB' 'SUnreclaim: 871408 kB' 'KernelStack: 27520 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12619912 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235652 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 
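The field scans in this stretch are verify_nr_hugepages collecting its inputs: AnonHugePages was just read into anon=0 (only consulted because the THP setting checked at @96, "always [madvise] never", is not "never"), HugePages_Surp is being read into surp here, and HugePages_Rsvd follows. In outline (variable names follow the @92-@100 records; a sketch that reuses the get_meminfo helper sketched earlier, not the script's literal code):

  # Sketch only: the bookkeeping behind the anon/surp/resv lookups being traced.
  anon=0
  if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # transparent hugepages currently in use
  fi
  surp=$(get_meminfo HugePages_Surp)      # surplus pages allocated beyond nr_hugepages
  resv=$(get_meminfo HugePages_Rsvd)      # pages reserved for mappings but not yet faulted
  echo "anon=$anon surp=$surp resv=$resv" # all three are 0 in this snapshot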
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 
21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.944 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.945 21:19:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.945 21:19:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  (the scan skips the remaining /proc/meminfo fields -- VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd -- until HugePages_Surp matches)
00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.945 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.946 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.946 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.946 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.946 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:47.946 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:47.946 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105022376 kB' 'MemAvailable: 108511364 kB' 'Buffers: 2704 kB' 'Cached: 14461268 kB' 'SwapCached: 0 kB' 'Active: 11521900 kB' 'Inactive: 3523448 kB' 'Active(anon): 11047716 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584736 kB' 'Mapped: 162884 kB' 'Shmem: 10466340 kB' 'KReclaimable: 532300 kB' 'Slab: 1403672 kB' 'SReclaimable: 532300 kB' 'SUnreclaim: 871372 kB' 'KernelStack: 27488 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12619940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB'
00:03:47.946 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ... == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue  (the scan skips every field of the snapshot above, from MemTotal through HugePages_Free, until HugePages_Rsvd matches)
00:03:47.947 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:47.947 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
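The get_meminfo calls traced above walk a meminfo snapshot one "Field: value" pair at a time with IFS=': ', skipping each field with "continue" until the requested one matches, then echo its value. A minimal stand-alone sketch of that pattern, reconstructed from this trace (the helper name and the "Node N " prefix handling follow the trace; the loop body is a simplification, not the exact setup/common.sh source):

    #!/usr/bin/env bash
    # get_meminfo FIELD [NODE] - print FIELD's value from /proc/meminfo, or from
    # /sys/devices/system/node/node<NODE>/meminfo when a NUMA node is given.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo mem
        shopt -s extglob
        # With a node argument, read that node's sysfs file instead of /proc/meminfo.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines look like "Node 0 MemTotal: ..."; strip the "Node N " part.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip every field until the requested one
            echo "$val"                        # e.g. "0" for HugePages_Surp in this run
            return 0
        done
        return 1
    }

In this run both get_meminfo HugePages_Surp and get_meminfo HugePages_Rsvd print 0, which hugepages.sh stores as surp and resv below.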
00:03:47.947 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:47.947 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:47.947 nr_hugepages=1024
00:03:47.947 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:47.947 resv_hugepages=0
00:03:47.947 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:47.947 surplus_hugepages=0
00:03:47.947 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:47.947 anon_hugepages=0
00:03:47.947 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:47.947 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:47.947 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:47.947 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:47.948 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:47.948 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:47.948 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:47.948 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.948 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.948 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.948 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.948 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.948 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:47.948 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:47.948 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105021556 kB' 'MemAvailable: 108510544 kB' 'Buffers: 2704 kB' 'Cached: 14461304 kB' 'SwapCached: 0 kB' 'Active: 11522596 kB' 'Inactive: 3523448 kB' 'Active(anon): 11048412 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585420 kB' 'Mapped: 162884 kB' 'Shmem: 10466376 kB' 'KReclaimable: 532300 kB' 'Slab: 1403672 kB' 'SReclaimable: 532300 kB' 'SUnreclaim: 871372 kB' 'KernelStack: 27504 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12620328 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235684 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB'
00:03:47.948 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ... == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue  (the scan skips every field of the snapshot above, from MemTotal through Unaccepted, until HugePages_Total matches)
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
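With surp=0 and resv=0 in hand, the hugepages.sh checks above are plain accounting: the pool the kernel reports (HugePages_Total: 1024 pages of 2048 kB each, i.e. the 2097152 kB shown as Hugetlb) must equal the requested nr_hugepages plus surplus plus reserved pages. A short sketch of that check under the same assumptions, reusing the get_meminfo sketch above and the numbers from this run (the exit-on-failure handling is illustrative, not the exact hugepages.sh source):

    # System-wide hugepage accounting for this run.
    nr_hugepages=1024                       # pool size requested by the test

    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 1024 in this run

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"

    # 1024 == 1024 + 0 + 0: the pool is exactly the requested size, with no
    # surplus or reserved pages skewing the per-node checks that follow.
    (( total == nr_hugepages + surp + resv )) || exit 1

Both (( ... )) checks above succeed with these numbers, so the test moves on to the per-node view via get_nodes.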
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53343692 kB' 'MemUsed: 12315316 kB' 'SwapCached: 0 kB' 'Active: 5134972 kB' 'Inactive: 3300284 kB' 'Active(anon): 4982412 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3300284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8144216 kB' 'Mapped: 81552 kB' 'AnonPages: 294384 kB' 'Shmem: 4691372 kB' 'KernelStack: 14744 kB' 'PageTables: 4944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 407960 kB' 'Slab: 934796 kB' 'SReclaimable: 407960 kB' 'SUnreclaim: 526836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
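get_nodes, traced above, enumerates the populated NUMA nodes under /sys/devices/system/node and records the 512 pages this test expects on each of the two nodes (512 + 512 = 1024, the system-wide pool); the per-node counters are then read from each node's own meminfo file, whose lines carry the "Node 0 " prefix that get_meminfo strips. A minimal sketch of that enumeration, with the same caveats as the earlier sketches (the nodes_sys/nodes_test bookkeeping is condensed into one array; this is not the literal hugepages.sh source):

    shopt -s extglob
    declare -A nodes_sys

    # One entry per populated NUMA node; this machine exposes node0 and node1.
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512        # expect 512 x 2048 kB pages per node
    done

    no_nodes=${#nodes_sys[@]}                # 2 on this system
    (( no_nodes > 0 )) || exit 1

    # Each node reports its own counters, e.g. /sys/devices/system/node/node0/meminfo,
    # where the snapshot above shows HugePages_Total/Free of 512 and HugePages_Surp of 0.
    for n in "${!nodes_sys[@]}"; do
        echo "node$n HugePages_Surp: $(get_meminfo HugePages_Surp "$n")"
    done

For node0 the snapshot above already shows 'HugePages_Total: 512', 'HugePages_Free: 512' and 'HugePages_Surp: 0', matching the expected half of the 1024-page pool.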
00:03:48.212 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  (the scan walks the node0 snapshot above field by field, from MemTotal through Unaccepted, looking for HugePages_Surp) 00:03:48.213 21:19:37
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.213 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51681828 kB' 'MemUsed: 8998044 kB' 'SwapCached: 0 kB' 'Active: 6387320 kB' 'Inactive: 223164 kB' 'Active(anon): 6065696 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 223164 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6319816 kB' 'Mapped: 81332 kB' 'AnonPages: 290820 kB' 'Shmem: 5775028 kB' 
'KernelStack: 12728 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124340 kB' 'Slab: 468780 kB' 'SReclaimable: 124340 kB' 'SUnreclaim: 344440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.214 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.215 21:19:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:48.215 node0=512 expecting 512 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:48.215 node1=512 expecting 512 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:48.215 00:03:48.215 real 0m3.796s 00:03:48.215 user 0m1.510s 00:03:48.215 sys 0m2.339s 00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.215 21:19:37 
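[editor's note] The trace above is setup/common.sh's get_meminfo walking a per-node meminfo file field by field and echoing the value for the requested key. A minimal standalone sketch of the same idea is shown below for illustration only; the function name and argument order are hypothetical, not the SPDK helper itself.

#!/usr/bin/env bash
# Illustrative re-implementation of the per-node meminfo lookup traced above:
# strip the "Node <N> " prefix, split each line on ": ", print the requested field.
get_node_meminfo() {            # hypothetical name, not the SPDK function
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}
# Example: the node 1 surplus hugepage count checked by this test (expected 0).
get_node_meminfo HugePages_Surp 1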
00:03:48.215 21:19:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:48.215 ************************************
00:03:48.215 END TEST per_node_1G_alloc
00:03:48.215 ************************************
00:03:48.215 21:19:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:48.215 21:19:37 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:48.215 21:19:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:48.215 21:19:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:48.215 21:19:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:48.215 ************************************
00:03:48.215 START TEST even_2G_alloc
00:03:48.215 ************************************
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
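[editor's note] The get_test_nr_hugepages trace above reduces the 2097152 kB (2 GiB) request to 1024 default-size pages and, with HUGE_EVEN_ALLOC=yes and no user-supplied node list, ends up with 512 pages per node. A small sketch of that arithmetic follows, assuming a 2048 kB default hugepage size; the variable names are illustrative, not the SPDK script's.

#!/usr/bin/env bash
# Illustrative version of the even per-node split computed in the trace above.
size_kb=2097152                                    # requested total (2 GiB)
default_hugepage_kb=2048                           # assumed default hugepage size
nr_hugepages=$(( size_kb / default_hugepage_kb ))  # -> 1024
no_nodes=2
declare -a nodes_test
for (( node = no_nodes - 1; node >= 0; node-- )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes ))   # -> 512 on each node
done
echo "nr_hugepages=$nr_hugepages per-node: ${nodes_test[*]}"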
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:48.215 21:19:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:51.513 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:51.513 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:51.513 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:51.513 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:51.513 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:51.513 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:51.513 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:51.513 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:51.513 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:51.513 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:51.513 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:51.513 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:51.513 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:51.513 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:51.513 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:51.513 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:51.513 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
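[editor's note] scripts/setup.sh above leaves the PCI devices bound to vfio-pci and reserves the requested hugepages before verify_nr_hugepages reads them back. For reference only, the generic kernel sysfs knob that a 512/512 per-node reservation of 2048 kB pages lands in is sketched below; this is the standard kernel interface, not necessarily the exact commands setup.sh issues.

#!/usr/bin/env bash
# Sketch: reserve 512 x 2048 kB hugepages on each of two NUMA nodes through the
# kernel's per-node sysfs interface (requires root).
per_node=512
for node in 0 1; do
    echo "$per_node" | sudo tee \
        "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
done
# Read back what the kernel actually granted on each node.
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages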
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.776 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105065832 kB' 'MemAvailable: 108554788 kB' 'Buffers: 2704 kB' 'Cached: 14461444 kB' 'SwapCached: 0 kB' 'Active: 11522548 kB' 'Inactive: 3523448 kB' 'Active(anon): 11048364 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584716 kB' 'Mapped: 163012 kB' 'Shmem: 10466516 kB' 'KReclaimable: 532268 kB' 'Slab: 1402280 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870012 kB' 'KernelStack: 27328 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12619396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB'
[setup/common.sh@31-32 xtrace: the fields above are checked one at a time against AnonHugePages; each non-matching field is skipped with "continue"]
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.778 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105061736 kB' 'MemAvailable: 108550692 kB' 'Buffers: 2704 kB' 'Cached: 14461448 kB' 'SwapCached: 0 kB' 'Active: 11524212 kB' 'Inactive: 3523448 kB' 'Active(anon): 11050028 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586960 kB' 'Mapped: 163420 kB' 'Shmem: 10466520 kB' 'KReclaimable: 532268 kB' 'Slab: 1402244 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 869976 kB' 'KernelStack: 27312 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12622156 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB'
[setup/common.sh@31-32 xtrace: the scan of the fields above for HugePages_Surp continues]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.043 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.043 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.043 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.043 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.043 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.043 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.043 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.043 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.043 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.043 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.043 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105058808 kB' 'MemAvailable: 108547764 kB' 'Buffers: 2704 kB' 'Cached: 14461464 kB' 'SwapCached: 0 kB' 'Active: 11521272 kB' 'Inactive: 3523448 kB' 'Active(anon): 11047088 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584004 kB' 'Mapped: 163316 kB' 'Shmem: 10466536 kB' 'KReclaimable: 532268 kB' 'Slab: 1402244 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 869976 kB' 'KernelStack: 27312 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12618576 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.044 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 
21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
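[editor's note] The repeated "IFS=': '" / "read -r var val _" / "continue" entries traced above are one pass of a meminfo scan: each line of /proc/meminfo (or a per-node meminfo file) is split into a key and a value, and every key other than the one requested (here HugePages_Rsvd, earlier HugePages_Surp) is skipped. A minimal standalone sketch of that style of lookup, under the assumption it mirrors what the traced helper does and not claiming to be the setup/common.sh implementation, looks like this:

    #!/usr/bin/env bash
    # get_meminfo_sketch KEY
    # Echo the value of KEY from /proc/meminfo, splitting each line on ': '
    # exactly as the xtrace above shows, and skipping non-matching keys.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"        # e.g. "0" for HugePages_Rsvd, "1024" for HugePages_Total
                return 0
            fi
        done < /proc/meminfo
        echo 0                     # key absent: report 0, like an unset counter
        return 0
    }

    get_meminfo_sketch HugePages_Surp
    get_meminfo_sketch HugePages_Rsvd

The per-node variant seen later in the log reads /sys/devices/system/node/node<N>/meminfo instead, whose lines carry a "Node <N> " prefix that has to be stripped before the same key match applies.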
00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.045 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.046 nr_hugepages=1024 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.046 resv_hugepages=0 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.046 surplus_hugepages=0 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.046 anon_hugepages=0 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.046 21:19:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.046 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105058808 kB' 'MemAvailable: 108547764 kB' 'Buffers: 2704 kB' 'Cached: 14461488 kB' 'SwapCached: 0 kB' 'Active: 11521112 kB' 'Inactive: 3523448 kB' 'Active(anon): 11046928 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583736 kB' 'Mapped: 162916 kB' 'Shmem: 10466560 kB' 'KReclaimable: 532268 kB' 'Slab: 1402244 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 869976 kB' 'KernelStack: 27296 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12618600 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
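[editor's note] The accounting the even_2G_alloc test is walking through here is plain arithmetic: the global HugePages_Total it just read must equal nr_hugepages plus surplus plus reserved pages (1024 == 1024 + 0 + 0 in this run), and the 1024 pages are then expected to be split evenly, 512 per NUMA node, which the per-node meminfo reads that follow verify. A hedged sketch of that check, with illustrative names rather than the ones setup/hugepages.sh uses:

    #!/usr/bin/env bash
    # Sketch of the even-allocation accounting traced in this log section.
    nr_hugepages=1024
    surp=0          # HugePages_Surp from /proc/meminfo
    resv=0          # HugePages_Rsvd from /proc/meminfo
    total=1024      # HugePages_Total from /proc/meminfo

    # Global accounting: every configured page is plain, surplus, or reserved.
    (( total == nr_hugepages + surp + resv )) || echo "global hugepage accounting is off"

    # Even split: with 2 NUMA nodes (no_nodes=2 in this run), expect 512 pages each.
    expected_per_node=$(( nr_hugepages / 2 ))
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        [[ -e $node_dir/meminfo ]] || continue
        node=${node_dir##*node}
        got=$(awk -v n="$node" '$1 == "Node" && $2 == n && $3 == "HugePages_Total:" {print $4}' \
              "$node_dir/meminfo")
        (( got == expected_per_node )) || echo "node $node has $got pages, expected $expected_per_node"
    done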
00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.047 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53362824 kB' 'MemUsed: 
12296184 kB' 'SwapCached: 0 kB' 'Active: 5132340 kB' 'Inactive: 3300284 kB' 'Active(anon): 4979780 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3300284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8144380 kB' 'Mapped: 81584 kB' 'AnonPages: 291452 kB' 'Shmem: 4691536 kB' 'KernelStack: 14712 kB' 'PageTables: 4812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 407928 kB' 'Slab: 934320 kB' 'SReclaimable: 407928 kB' 'SUnreclaim: 526392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 
21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 
21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
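The node 0 lookup above ends with echo 0 (no surplus hugepages on node 0), and the same walk repeats for node 1 below. The pattern being traced is a plain field lookup: use /sys/devices/system/node/node<N>/meminfo when a node is requested (falling back to /proc/meminfo), strip the "Node <N>" prefix that the per-node files add, and print the value of the first key that matches. A minimal, self-contained sketch of that pattern, with an illustrative helper name rather than SPDK's actual setup/common.sh code:

    get_meminfo_sketch() {                    # illustrative name, not the real helper
        local get=$1 node=${2:-}              # e.g. HugePages_Surp 0
        local file=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#"Node $node "}        # per-node rows are prefixed with "Node <N> "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                   # prints 0 for HugePages_Surp on node 0 above
                return 0
            fi
        done < "$file"
        return 1
    }
    # usage: get_meminfo_sketch HugePages_Surp 0    -> 0 on the box traced here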
00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51695984 kB' 'MemUsed: 8983888 kB' 'SwapCached: 0 kB' 'Active: 6388580 kB' 'Inactive: 223164 kB' 'Active(anon): 6066956 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 223164 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6319852 kB' 'Mapped: 81332 kB' 'AnonPages: 292056 kB' 'Shmem: 5775064 kB' 'KernelStack: 12584 kB' 'PageTables: 3472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124340 kB' 'Slab: 467924 kB' 'SReclaimable: 124340 kB' 'SUnreclaim: 343584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:52.050 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:52.051 node0=512 expecting 512 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:52.051 node1=512 expecting 512 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:52.051 00:03:52.051 real 0m3.831s 00:03:52.051 user 0m1.508s 00:03:52.051 sys 0m2.381s 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.051 21:19:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:52.051 ************************************ 00:03:52.051 END TEST even_2G_alloc 00:03:52.051 ************************************ 00:03:52.051 
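even_2G_alloc passes: 2 GB of 2 MB hugepages gives a HugePages_Total of 1024, and each of the two NUMA nodes reports the expected even share of 512, confirmed by the "node0=512 expecting 512" and "node1=512 expecting 512" lines above; the whole test takes about 3.8 s of wall time. As a hedged illustration of what that per-node check amounts to (the sysfs path below is the standard kernel location for per-node 2 MB hugepage counts; the helper name is made up, not the test's own code):

    check_even_split() {                      # hypothetical helper
        local expected=512 node_dir got
        for node_dir in /sys/devices/system/node/node[0-9]*; do
            got=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
            echo "${node_dir##*/}=$got expecting $expected"
            (( got == expected )) || return 1 # any uneven node would fail the check
        done
    }
    # on this system: node0=512 expecting 512, node1=512 expecting 512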
21:19:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:52.051 21:19:41 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:52.051 21:19:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.051 21:19:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.051 21:19:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.051 ************************************ 00:03:52.051 START TEST odd_alloc 00:03:52.051 ************************************ 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.051 21:19:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.352 0000:80:01.6 (8086 0b00): Already using 
the vfio-pci driver 00:03:55.352 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.352 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.352 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.352 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.352 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.352 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.352 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.352 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.352 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:55.352 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.352 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.352 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.352 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.352 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.352 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.352 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105037552 kB' 'MemAvailable: 108526508 kB' 'Buffers: 2704 kB' 'Cached: 14461604 kB' 'SwapCached: 0 kB' 'Active: 11522244 kB' 'Inactive: 3523448 kB' 'Active(anon): 11048060 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584540 kB' 'Mapped: 162952 kB' 'Shmem: 10466676 kB' 'KReclaimable: 532268 kB' 'Slab: 1403000 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870732 kB' 'KernelStack: 27472 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12620364 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235892 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.352 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue
[setup/common.sh@31-32 xtrace condensed: the remaining /proc/meminfo keys (Zswap through HardwareCorrupted) are each read and skipped]
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
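[Editor's note, not test output: a minimal Bash sketch of the kind of per-key lookup the xtrace above is stepping through; get_meminfo_sketch is an invented name, the real helper is the get_meminfo in setup/common.sh shown in the trace, which additionally handles per-NUMA-node meminfo files and the "Node <n> " prefix. Assumed behavior for the sketch only: print the value when the key is found, otherwise print 0.]

    # Hedged sketch: read /proc/meminfo, split each line on ': ',
    # and print the value of the requested key.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # compare each key (e.g. AnonHugePages) against the requested one
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        echo 0   # assumption for this sketch: report 0 when the key is absent
    }

    get_meminfo_sketch AnonHugePages   # prints 0 on this build node (see the snapshot below)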
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.353 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.354 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105041412 kB' 'MemAvailable: 108530368 kB' 'Buffers: 2704 kB' 'Cached: 14461628 kB' 'SwapCached: 0 kB' 'Active: 11522444 kB' 'Inactive: 3523448 kB' 'Active(anon): 11048260 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584824 kB' 'Mapped: 162940 kB' 'Shmem: 10466700 kB' 'KReclaimable: 532268 kB' 'Slab: 1402992 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870724 kB' 'KernelStack: 27248 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12619460 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB'
[setup/common.sh@31-32 xtrace condensed: every key in the snapshot above is read and skipped until HugePages_Surp matches]
00:03:55.355 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.355 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.355 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:55.355 21:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
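[Editor's note, not test output: the four hugepage counters the test gathers one get_meminfo call at a time can also be pulled in a single pass. A hedged one-liner, using only keys visible in the snapshot above, where the values on this node are 1025, 1025, 0 and 0:]

    # Print the HugePages_Total/Free/Rsvd/Surp counters from /proc/meminfo.
    awk -F': +' '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo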
00:03:55.355 21:19:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[setup/common.sh@17-31 xtrace condensed: same locals, /proc/meminfo mapfile and "Node <n> " prefix strip as above, now for HugePages_Rsvd]
00:03:55.355 21:19:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105042440 kB' 'MemAvailable: 108531396 kB' 'Buffers: 2704 kB' 'Cached: 14461644 kB' 'SwapCached: 0 kB' 'Active: 11522092 kB' 'Inactive: 3523448 kB' 'Active(anon): 11047908 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584512 kB' 'Mapped: 162936 kB' 'Shmem: 10466716 kB' 'KReclaimable: 532268 kB' 'Slab: 1402608 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870340 kB' 'KernelStack: 27280 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12619528 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB'
[setup/common.sh@31-32 xtrace condensed: per-key scan of the snapshot above continues until HugePages_Rsvd matches]
00:03:55.357 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:55.357 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.357 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:55.357 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
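[Editor's note, not test output: the trace lines that follow record the odd_alloc accounting itself. A hedged Bash sketch of that arithmetic, using the counts gathered above and the odd page count of 1025 that appears in the checks below; nothing new is computed:]

    # Mirrors setup/hugepages.sh@107 and @109 as seen in the trace:
    # the requested odd hugepage count must be covered with no surplus
    # and no reserved pages left over.
    nr_hugepages=1025 surp=0 resv=0
    if (( 1025 == nr_hugepages + surp + resv )) && (( 1025 == nr_hugepages )); then
        echo "odd hugepage count fully accounted for"
    fi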
00:03:55.357 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:55.357 nr_hugepages=1025
00:03:55.357 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:55.357 resv_hugepages=0
00:03:55.357 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:55.357 surplus_hugepages=0
00:03:55.357 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:55.357 anon_hugepages=0
00:03:55.357 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:55.357 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:55.357 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[setup/common.sh@17-31 xtrace condensed: same locals and /proc/meminfo mapfile as above, now for HugePages_Total]
00:03:55.357 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105042692 kB' 'MemAvailable: 108531648 kB' 'Buffers: 2704 kB' 'Cached: 14461664 kB' 'SwapCached: 0 kB' 'Active: 11522076 kB' 'Inactive: 3523448 kB' 'Active(anon): 11047892 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584456 kB' 'Mapped: 162936 kB' 'Shmem: 10466736 kB' 'KReclaimable: 532268 kB' 'Slab: 1402644 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870376 kB' 'KernelStack: 27264 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12619548 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB'
[setup/common.sh@31-32 xtrace condensed: per-key scan of the snapshot above for HugePages_Total under way]
# [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
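(Editor's note: the long run of "# continue" entries above and below is the get_meminfo helper scanning /proc/meminfo field by field until it reaches the requested key, here HugePages_Total. A condensed sketch of that pattern follows; get_meminfo_sketch is a hypothetical name and this is an illustration of the traced technique, not the actual setup/common.sh implementation.)

# Hypothetical stand-in for the get_meminfo pattern traced above: return one
# field from /proc/meminfo, or from a per-NUMA-node meminfo file when a node
# index is given.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#Node [0-9] }        # per-node files prefix every line with "Node N"
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done <"$mem_f"
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total    -> 1025 on the box traced here
# e.g. get_meminfo_sketch HugePages_Surp 0   -> 0 (see the node0 lookup further down)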
00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53357428 kB' 'MemUsed: 12301580 kB' 'SwapCached: 0 kB' 'Active: 5132788 kB' 'Inactive: 3300284 kB' 'Active(anon): 4980228 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3300284 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8144504 kB' 'Mapped: 81604 kB' 'AnonPages: 291744 kB' 'Shmem: 4691660 kB' 'KernelStack: 14728 kB' 'PageTables: 4856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 407928 kB' 'Slab: 934824 kB' 'SReclaimable: 407928 kB' 'SUnreclaim: 526896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.359 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51685552 kB' 'MemUsed: 8994320 kB' 'SwapCached: 0 kB' 'Active: 6389396 kB' 'Inactive: 223164 kB' 'Active(anon): 6067772 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 223164 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6319888 kB' 'Mapped: 81332 kB' 'AnonPages: 292832 kB' 'Shmem: 5775100 kB' 'KernelStack: 12584 kB' 'PageTables: 3464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124340 kB' 'Slab: 467820 kB' 'SReclaimable: 124340 kB' 'SUnreclaim: 343480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.360 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
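(Editor's note: the two per-node lookups above, node0 then node1, feed the bookkeeping that finishes just below with "node0=512 expecting 513" / "node1=513 expecting 512". A reduced sketch of that loop follows; get_meminfo_sketch is the hypothetical helper from the earlier note, and the starting values are placeholders, not taken from the trace.)

# Illustrative reduction of the per-node bookkeeping traced above; not the
# real setup/hugepages.sh.
declare -a nodes_test=([0]=513 [1]=512)   # placeholder split of the 1025 requested pages
resv=0                                    # HugePages_Rsvd is 0 in the system-wide dump above
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                          # reserved pages
    (( nodes_test[node] += $(get_meminfo_sketch HugePages_Surp "$node") ))  # that node's surplus (0 here)
done

The sorted_t/sorted_s step just below then compares the multiset of expected counts against the counts read from sysfs ([[ 512 513 == 512 513 ]]), so the odd_alloc check passes regardless of which node ended up holding the extra page.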
00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:55.362 node0=512 expecting 513 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:55.362 node1=513 expecting 512 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:55.362 00:03:55.362 real 0m3.301s 00:03:55.362 user 0m1.166s 00:03:55.362 sys 0m2.090s 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.362 21:19:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.362 ************************************ 00:03:55.362 END TEST odd_alloc 00:03:55.362 ************************************ 00:03:55.362 21:19:45 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:55.362 21:19:45 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:55.362 21:19:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.362 21:19:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.362 21:19:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.623 ************************************ 00:03:55.623 START TEST custom_alloc 00:03:55.623 ************************************ 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size 
>= default_hugepages )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:55.623 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:55.624 21:19:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:55.624 21:19:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.624 21:19:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:58.919 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:58.919 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:58.919 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:58.919 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:58.919 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:58.919 0000:80:01.3 (8086 
0b00): Already using the vfio-pci driver 00:03:58.919 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:58.919 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:58.919 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:58.919 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:58.919 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:58.919 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:58.920 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:58.920 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:58.920 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:58.920 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:58.920 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103976944 kB' 'MemAvailable: 107465900 kB' 'Buffers: 2704 kB' 'Cached: 14461796 kB' 'SwapCached: 0 kB' 'Active: 11523120 kB' 'Inactive: 3523448 kB' 'Active(anon): 11048936 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585380 kB' 'Mapped: 
163012 kB' 'Shmem: 10466868 kB' 'KReclaimable: 532268 kB' 'Slab: 1402864 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870596 kB' 'KernelStack: 27264 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12620312 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
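
The xtrace at setup/hugepages.sh@175-@188 above shows the custom_alloc test asking for 512 two-megabyte pages on NUMA node 0 and 1024 on node 1, joining them into the HUGENODE string and summing them into nr_hugepages=1536. A minimal bash sketch of that accumulation, assuming only what the trace itself shows:

  nodes_hp=([0]=512 [1]=1024)                         # per-node page counts from hugepages.sh@175/@178
  _nr_hugepages=0
  HUGENODE=()
  for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")   # hugepages.sh@182
    ((_nr_hugepages += nodes_hp[node]))               # hugepages.sh@183
  done
  echo "HUGENODE=$(IFS=,; echo "${HUGENODE[*]}") nr_hugepages=$_nr_hugepages"
  # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 nr_hugepages=1536, matching @187/@188
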
00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.187 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@18 -- # local node= 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103982580 kB' 'MemAvailable: 107471536 kB' 'Buffers: 2704 kB' 'Cached: 14461796 kB' 'SwapCached: 0 kB' 'Active: 11523232 kB' 'Inactive: 3523448 kB' 'Active(anon): 11049048 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585492 kB' 'Mapped: 163012 kB' 'Shmem: 10466868 kB' 'KReclaimable: 532268 kB' 'Slab: 1402856 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870588 kB' 'KernelStack: 27216 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12620332 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
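
Most of the surrounding trace is the get_meminfo helper from setup/common.sh scanning a snapshot of /proc/meminfo one "key: value" pair at a time (mapfile at @28, the read loop at @31, the key comparison at @32). A rough sketch of that helper, an approximation of the traced flow rather than the verbatim SPDK function:

  shopt -s extglob                              # the Node-prefix strip below needs extglob
  get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # per-node statistics come from sysfs when a node is requested (common.sh@23/@25)
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
      && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"                   # snapshot the file (common.sh@28)
    mem=("${mem[@]#Node +([0-9]) }")            # drop any "Node N " prefix (common.sh@29)
    while IFS=': ' read -r var val _; do        # common.sh@31
      [[ $var == "$get" ]] || continue          # common.sh@32
      echo "$val" && return 0                   # common.sh@33
    done < <(printf '%s\n' "${mem[@]}")         # the printf '%s\n' lines in the trace
    return 1
  }
  get_meminfo AnonHugePages                     # -> 0 in this run, hence anon=0 at hugepages.sh@97
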
00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.188 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.189 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
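
The backslash-riddled patterns in these comparisons, such as [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]], are not corruption: when the right-hand side of == inside [[ ]] is quoted, bash matches it literally instead of as a glob, and set -x marks that by escaping every character of the expansion. A two-line reproduction:

  set -x
  get=HugePages_Surp
  [[ CmaFree == "$get" ]] || :    # traced as: [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
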
00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
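
At this point verify_nr_hugepages (setup/hugepages.sh@89 onwards) has recorded anon=0 and surp=0 and is about to query HugePages_Rsvd. In outline, and only as an approximation of the traced flow, it collects:

  anon=$(get_meminfo AnonHugePages)    # AnonHugePages: 0 kB above   -> anon=0 (hugepages.sh@97)
  surp=$(get_meminfo HugePages_Surp)   # 0 in this run               -> surp=0 (hugepages.sh@99)
  resv=$(get_meminfo HugePages_Rsvd)   # queried in the trace that follows (hugepages.sh@100)

presumably so that the 1536 pages requested through HUGENODE can be checked against what the kernel actually reports.
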
00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103982420 kB' 'MemAvailable: 107471376 kB' 'Buffers: 2704 kB' 'Cached: 14461816 kB' 'SwapCached: 0 kB' 'Active: 11522828 kB' 'Inactive: 3523448 kB' 'Active(anon): 11048644 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585032 kB' 'Mapped: 162956 kB' 'Shmem: 10466888 kB' 'KReclaimable: 532268 kB' 'Slab: 1402872 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870604 kB' 'KernelStack: 27248 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12620352 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.190 
21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.190 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 
21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
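
Each meminfo snapshot in this pass reports HugePages_Total: 1536, Hugepagesize: 2048 kB and Hugetlb: 3145728 kB, which is internally consistent with the per-node request traced earlier:

  echo $((512 + 1024))     # 1536 pages across nodes 0 and 1, as requested via HUGENODE
  echo $((1536 * 2048))    # 3145728 kB backing them, matching the Hugetlb: line
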
00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.191 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:59.192 nr_hugepages=1536 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.192 resv_hugepages=0 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.192 surplus_hugepages=0 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.192 anon_hugepages=0 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103981664 kB' 'MemAvailable: 107470620 kB' 'Buffers: 2704 kB' 'Cached: 14461840 kB' 'SwapCached: 0 kB' 'Active: 11522812 kB' 'Inactive: 3523448 kB' 'Active(anon): 11048628 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584996 kB' 'Mapped: 162956 kB' 'Shmem: 10466912 kB' 'KReclaimable: 532268 kB' 'Slab: 1402872 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 
870604 kB' 'KernelStack: 27232 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12620376 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.192 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.193 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
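[editor note] The trace above is setup/common.sh's get_meminfo walking every key of /proc/meminfo until it reaches the requested one, echoing 0 for HugePages_Rsvd and 1536 for HugePages_Total before get_nodes starts. As a minimal standalone sketch of that lookup pattern (the function name meminfo_value and its argument handling are illustrative assumptions, not the suite's helper):

meminfo_value() {
	# meminfo_value KEY [NODE] -> prints the value for KEY from /proc/meminfo,
	# or from /sys/devices/system/node/node<NODE>/meminfo when NODE is given.
	local get=$1 node=${2:-} mem_f=/proc/meminfo line var val unit
	[[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	while IFS= read -r line; do
		line=${line#Node $node }   # per-node files prefix each line with "Node <id> "
		var=${line%%:*}            # key before the colon
		val=${line#*:}             # raw value, e.g. "    1536" or "  103981664 kB"
		if [[ $var == "$get" ]]; then
			read -r val unit <<<"$val"   # trim whitespace, drop a trailing "kB"
			echo "$val"
			return 0
		fi
	done <"$mem_f"
	return 1
}
# Example (values as echoed in this run): meminfo_value HugePages_Total -> 1536,
# meminfo_value HugePages_Rsvd -> 0.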
00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53350312 kB' 'MemUsed: 12308696 kB' 'SwapCached: 0 kB' 'Active: 5134724 kB' 'Inactive: 3300284 kB' 'Active(anon): 4982164 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3300284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8144656 kB' 'Mapped: 81624 kB' 'AnonPages: 293500 kB' 'Shmem: 4691812 kB' 'KernelStack: 14712 kB' 'PageTables: 4812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 407928 kB' 'Slab: 934740 kB' 'SReclaimable: 407928 kB' 'SUnreclaim: 526812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.194 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 50631352 kB' 'MemUsed: 10048520 kB' 'SwapCached: 0 kB' 'Active: 6388236 kB' 'Inactive: 223164 kB' 'Active(anon): 6066612 kB' 'Inactive(anon): 0 kB' 'Active(file): 321624 kB' 'Inactive(file): 223164 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6319904 kB' 'Mapped: 81332 kB' 'AnonPages: 291664 kB' 'Shmem: 5775116 kB' 'KernelStack: 12568 kB' 'PageTables: 3476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124340 kB' 'Slab: 468132 kB' 'SReclaimable: 124340 kB' 'SUnreclaim: 343792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 
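[editor note] At this point the node-0 lookup has returned HugePages_Surp: 0, and the trace repeats the same walk for node 1 (1024 pages there, 512 on node 0, 1536 total). One way to cross-check that per-node split from the node meminfo files, sketched under the sysfs layout shown in this log (node0/node1 under /sys/devices/system/node); this is not the suite's get_nodes, just an independent tally:

total=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
	node=${node_dir##*node}
	# Last field of the "Node N HugePages_Total:" line is the page count.
	pages=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
	echo "node$node: $pages hugepages"
	total=$((total + pages))
done
echo "sum across nodes: $total"   # expected to match nr_hugepages (1536 in this run: 512 + 1024)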
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.195 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.196 21:19:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:59.196 node0=512 expecting 512 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.196 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.197 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:59.197 node1=1024 expecting 1024 00:03:59.197 21:19:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:59.197 00:03:59.197 real 0m3.797s 00:03:59.197 user 0m1.447s 00:03:59.197 sys 0m2.410s 00:03:59.197 21:19:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.197 21:19:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.197 ************************************ 00:03:59.197 END TEST custom_alloc 00:03:59.197 ************************************ 00:03:59.458 21:19:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:59.458 21:19:49 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:59.458 21:19:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.458 21:19:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.458 21:19:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.458 ************************************ 00:03:59.458 START TEST no_shrink_alloc 00:03:59.458 ************************************ 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:59.458 21:19:49 
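The no_shrink_alloc preparation traced at this point asks for a 2097152 kB (2 GiB) hugepage pool pinned to node 0: get_test_nr_hugepages divides the requested size by the system's 2048 kB Hugepagesize (visible in the /proc/meminfo snapshots below), giving nr_hugepages=1024 for that node. The sketch below only illustrates that arithmetic and the standard per-node sysfs interface such a request usually boils down to; it is not the SPDK setup.sh itself, and it assumes a NUMA kernel exposing the per-node hugetlb files plus root privileges.

# Illustrative only: requested pool size -> per-node hugepage count -> sysfs write.
size_kb=2097152                                            # 2 GiB expressed in kB
hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this system
nr=$(( size_kb / hp_kb ))                                  # 2097152 / 2048 = 1024 pages
echo "$nr" | sudo tee "/sys/devices/system/node/node0/hugepages/hugepages-${hp_kb}kB/nr_hugepages"

The custom_alloc result printed just above ("node0=512 expecting 512", "node1=1024 expecting 1024") exercises the same mechanism with the pool split across two nodes.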
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.458 21:19:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.765 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.765 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.765 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.765 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.765 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.765 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.765 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.765 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:02.765 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.766 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:02.766 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.766 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.766 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.766 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.766 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.766 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.766 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105001632 kB' 'MemAvailable: 108490588 kB' 'Buffers: 2704 kB' 'Cached: 14461972 kB' 'SwapCached: 0 kB' 'Active: 11524188 kB' 'Inactive: 3523448 kB' 'Active(anon): 11050004 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586336 kB' 'Mapped: 163052 kB' 'Shmem: 10467044 kB' 'KReclaimable: 532268 kB' 'Slab: 1402612 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870344 kB' 'KernelStack: 27296 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12621176 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 
21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
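This AnonHugePages read is gated by the check traced a little earlier at hugepages.sh@96, which tests the transparent-hugepage setting "always [madvise] never" against *[never]*: anonymous THP accounting is only consulted when THP is not globally disabled. A standalone sketch of that gate, assuming the standard /sys/kernel/mm/transparent_hugepage/enabled knob (illustrative, not the SPDK script):

# Illustrative: only read AnonHugePages when THP is not set to [never].
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)     # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon_kb=0
fi
echo "AnonHugePages: ${anon_kb} kB"                        # 0 kB on the node traced here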
00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.766 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105006000 kB' 'MemAvailable: 108494956 kB' 'Buffers: 2704 kB' 'Cached: 14461976 kB' 'SwapCached: 0 kB' 'Active: 11523852 kB' 'Inactive: 3523448 kB' 'Active(anon): 
11049668 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585984 kB' 'Mapped: 163000 kB' 'Shmem: 10467048 kB' 'KReclaimable: 532268 kB' 'Slab: 1402588 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870320 kB' 'KernelStack: 27280 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12621196 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.767 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
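The long field-by-field walk here, like the identical one for AnonHugePages above, is get_meminfo scanning /proc/meminfo with IFS=': ' and read -r var val _, skipping every key that is not the requested one and finally echoing its value (0 for HugePages_Surp on this box). The whole pattern condenses to a few lines of bash; the helper below (meminfo_value is a made-up name) is a minimal sketch, not the SPDK setup/common.sh, which additionally handles the per-node /sys/devices/system/node/node<N>/meminfo variant.

# Minimal get_meminfo-style helper: print the value column for one meminfo key.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < /proc/meminfo
    echo 0          # key not present at all
}

meminfo_value HugePages_Surp    # -> 0, matching the value returned in the trace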
00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.768 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
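These readbacks (anon, surplus and, next, reserved hugepages) all feed verify_nr_hugepages, which ends by comparing the observed per-node totals with the expected split, exactly the "node0=512 expecting 512" / "node1=1024 expecting 1024" lines printed for custom_alloc earlier. Below is an illustrative way to make that comparison straight from the per-node meminfo files, whose lines carry the "Node <N> " prefix that the traced script strips with "${mem[@]#Node +([0-9]) }"; it assumes a NUMA kernel, and the expected values merely mirror the custom_alloc case rather than any general requirement.

# Illustrative check: per-node HugePages_Total versus the split a test expects.
declare -A expected=( [0]=512 [1]=1024 )
for node in "${!expected[@]}"; do
    actual=$(awk '$3 == "HugePages_Total:" {print $4}' \
             "/sys/devices/system/node/node${node}/meminfo")
    echo "node${node}=${actual:-?} expecting ${expected[$node]}"
    [[ ${actual:-0} -eq ${expected[$node]} ]] || echo "mismatch on node ${node}"
done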
00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105006372 kB' 'MemAvailable: 108495328 kB' 'Buffers: 2704 kB' 'Cached: 14461992 kB' 'SwapCached: 0 kB' 'Active: 11523796 kB' 'Inactive: 3523448 kB' 'Active(anon): 11049612 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585888 kB' 'Mapped: 163000 kB' 'Shmem: 10467064 kB' 
'KReclaimable: 532268 kB' 'Slab: 1402588 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870320 kB' 'KernelStack: 27296 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12621216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.769 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
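[editorial note] The lookups traced here feed the accounting check that follows further down (setup/hugepages.sh @99-@130): read the surplus, reserved and total counts, confirm that the total equals the requested pages plus surplus plus reserved, then report the per-node counts that produce the "node0=1024 expecting 1024" line. A condensed illustration under those assumptions is sketched below; it reuses the get_meminfo sketch above, and EXPECTED is an illustrative stand-in for the 1024 pages this test configured, not a variable from the script.

EXPECTED=1024                               # pages configured for this test run
surp=$(get_meminfo HugePages_Surp)          # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)          # 0 in this run
total=$(get_meminfo HugePages_Total)        # 1024 in this run
echo "nr_hugepages=$EXPECTED resv_hugepages=$resv surplus_hugepages=$surp"

# The pool only passes if the global total equals the requested pages plus any
# surplus and reserved pages, and each NUMA node still reports its share.
if (( total == EXPECTED + surp + resv )); then
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        echo "node$node=$(get_meminfo HugePages_Total "$node") surplus=$(get_meminfo HugePages_Surp "$node")"
    done
fi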
00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.770 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.771 nr_hugepages=1024 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.771 resv_hugepages=0 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.771 surplus_hugepages=0 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.771 anon_hugepages=0 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105005492 kB' 'MemAvailable: 108494448 kB' 'Buffers: 2704 kB' 'Cached: 14462016 kB' 'SwapCached: 0 kB' 'Active: 11523844 kB' 'Inactive: 3523448 kB' 'Active(anon): 11049660 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585908 kB' 'Mapped: 163000 kB' 'Shmem: 10467088 kB' 'KReclaimable: 532268 kB' 'Slab: 1402644 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870376 kB' 'KernelStack: 27312 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12621240 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.771 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.772 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52335580 kB' 'MemUsed: 13323428 kB' 'SwapCached: 0 kB' 'Active: 5133460 kB' 'Inactive: 3300284 kB' 'Active(anon): 4980900 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3300284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8144788 kB' 'Mapped: 81668 kB' 'AnonPages: 292140 kB' 'Shmem: 4691944 kB' 'KernelStack: 14728 kB' 'PageTables: 4860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 407928 kB' 'Slab: 934592 kB' 'SReclaimable: 407928 kB' 'SUnreclaim: 526664 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 
21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.773 21:19:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue [get_meminfo scans the remaining /proc/meminfo keys (Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped); none matches HugePages_Surp] 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p
]] 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:02.774 node0=1024 expecting 1024 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.774 21:19:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.082 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.082 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.082 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.082 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.082 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.082 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.082 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.082 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.082 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.082 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:06.082 0000:00:01.7 (8086 0b00): Already using the 
vfio-pci driver 00:04:06.082 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.082 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.082 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.082 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.082 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.082 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.344 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105034580 kB' 'MemAvailable: 108523536 kB' 'Buffers: 2704 kB' 'Cached: 14462124 kB' 'SwapCached: 0 kB' 'Active: 11525028 kB' 'Inactive: 3523448 kB' 'Active(anon): 11050844 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587004 kB' 'Mapped: 163012 kB' 'Shmem: 10467196 kB' 'KReclaimable: 532268 kB' 'Slab: 1402564 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870296 kB' 'KernelStack: 27312 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12622224 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 
235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.344 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.612 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.612 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.612 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.612 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.612 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.612 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.612 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.612 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.612 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.612 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.612 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.612 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.612 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.612 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.612 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.612 21:19:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [get_meminfo scans the remaining /proc/meminfo keys (Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted); none matches AnonHugePages until the AnonHugePages entry itself]
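The test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] seen earlier in this trace has the shape of a guard on /sys/kernel/mm/transparent_hugepage/enabled, where the kernel marks the active THP mode in brackets (for example "always [madvise] never"); AnonHugePages is only worth reading when that knob is not set to [never]. Below is a minimal sketch of such a guard, assuming the standard sysfs path and bracket convention; the helper name and its use are illustrative, not the upstream script's:

  #!/usr/bin/env bash
  # thp_anon_enabled: succeed unless transparent hugepages are fully disabled,
  # i.e. unless the sysfs knob reads "... [never]". The file and its
  # "[selected]" convention are standard Linux sysfs; the function name and
  # the usage below are illustrative assumptions, not the SPDK script.
  thp_anon_enabled() {
      local thp_sys=/sys/kernel/mm/transparent_hugepage/enabled
      [[ -r $thp_sys ]] || return 1        # no THP support exposed
      local state
      state=$(<"$thp_sys")                 # e.g. "always [madvise] never"
      [[ $state != *"[never]"* ]]          # success when THP is not "never"
  }

  # Example: only bother reading AnonHugePages when THP can actually be used.
  if thp_anon_enabled; then
      grep AnonHugePages /proc/meminfo
  fi
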
00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
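The echo 0 / return 0 just above closes a get_meminfo-style lookup: read /proc/meminfo (or the per-node copy under /sys/devices/system/node/node$node/meminfo when a node is given), strip any "Node N " prefix, then split each line on ': ' and print the value of the first matching key. The sketch below mirrors the commands visible in the trace (mapfile, the extglob prefix strip, IFS=': ' read -r var val _), but it is an approximation under those assumptions, not the upstream setup/common.sh; the function name is assumed.

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern used below

  # get_meminfo_value KEY [NODE]
  # Prints the value of KEY from /proc/meminfo, or from the per-node meminfo
  # file when NODE is given. Sketch of the pattern seen in the trace; the
  # name and argument handling are assumptions.
  get_meminfo_value() {
      local get=$1 node=${2:-}
      local var val _rest
      local mem_f mem
      mem_f=/proc/meminfo
      # Per-node files repeat every key as "Node <n> <Key>: <value> kB".
      [[ -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node N " prefix
      while IFS=': ' read -r var val _rest; do
          if [[ $var == "$get" ]]; then
              echo "$val"                  # e.g. "0" for HugePages_Surp
              return 0
          fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1                             # key not present
  }

  # Usage: surplus and reserved hugepages for the whole system.
  get_meminfo_value HugePages_Surp
  get_meminfo_value HugePages_Rsvd
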
00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105034580 kB' 'MemAvailable: 108523536 kB' 'Buffers: 2704 kB' 'Cached: 14462128 kB' 'SwapCached: 0 kB' 'Active: 11525092 kB' 'Inactive: 3523448 kB' 'Active(anon): 11050908 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587016 kB' 'Mapped: 163012 kB' 'Shmem: 10467200 kB' 'KReclaimable: 532268 kB' 'Slab: 1402560 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870292 kB' 'KernelStack: 27376 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12625404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.613 21:19:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue [get_meminfo scans the remaining /proc/meminfo keys (Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd); none matches HugePages_Surp until the HugePages_Surp entry itself] 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
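At this point verify_nr_hugepages has anon=0 and surp=0 and is fetching HugePages_Rsvd; the "node0=1024 expecting 1024" line earlier is the per-node comparison those values feed into, and setup.sh reported that the 512 hugepages requested for this run were already covered by the 1024 allocated on node0. A rough sketch of that kind of per-node check against the standard sysfs counters follows; the accounting and the variable names (NRHUGE default, HUGEPAGESIZE_KB) are assumptions for illustration, not the upstream hugepages.sh logic:

  #!/usr/bin/env bash
  # Compare the hugepages actually allocated on each NUMA node against an
  # expected total. Sketch only: the real test also folds in surplus and
  # reserved pages; here we simply read the per-node counters.
  expected_total=${NRHUGE:-512}              # e.g. the 512 requested in this run
  page_kb=${HUGEPAGESIZE_KB:-2048}           # 2048 kB hugepages, as in the log

  total=0
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*/node}
      nr_file=$node_dir/hugepages/hugepages-${page_kb}kB/nr_hugepages
      [[ -r $nr_file ]] || continue
      nr=$(<"$nr_file")
      echo "node${node}=${nr} expecting ${expected_total}"
      (( total += nr ))
  done

  if (( total >= expected_total )); then
      echo "OK: ${total} hugepages already allocated, nothing to shrink"
  else
      echo "WARN: only ${total} of ${expected_total} hugepages allocated"
  fi
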
21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105034436 kB' 'MemAvailable: 108523392 kB' 'Buffers: 2704 kB' 'Cached: 14462148 kB' 'SwapCached: 0 kB' 'Active: 11524884 kB' 'Inactive: 3523448 kB' 'Active(anon): 11050700 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586828 kB' 'Mapped: 163020 kB' 'Shmem: 10467220 kB' 'KReclaimable: 532268 kB' 'Slab: 1402560 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870292 kB' 'KernelStack: 27312 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12623628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.615 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] [get_meminfo scans the remaining /proc/meminfo keys (Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack); none matches HugePages_Rsvd] 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- #
continue 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.616 21:19:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.617 nr_hugepages=1024 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.617 resv_hugepages=0 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.617 surplus_hugepages=0 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.617 anon_hugepages=0 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local 
node= 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105034920 kB' 'MemAvailable: 108523876 kB' 'Buffers: 2704 kB' 'Cached: 14462168 kB' 'SwapCached: 0 kB' 'Active: 11524720 kB' 'Inactive: 3523448 kB' 'Active(anon): 11050536 kB' 'Inactive(anon): 0 kB' 'Active(file): 474184 kB' 'Inactive(file): 3523448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586584 kB' 'Mapped: 163012 kB' 'Shmem: 10467240 kB' 'KReclaimable: 532268 kB' 'Slab: 1402560 kB' 'SReclaimable: 532268 kB' 'SUnreclaim: 870292 kB' 'KernelStack: 27248 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12623648 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 140544 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4474228 kB' 'DirectMap2M: 32954368 kB' 'DirectMap1G: 98566144 kB' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.617 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.618 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.619 21:19:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52333088 kB' 'MemUsed: 13325920 kB' 'SwapCached: 0 kB' 'Active: 5135680 kB' 'Inactive: 3300284 kB' 'Active(anon): 4983120 kB' 'Inactive(anon): 0 kB' 'Active(file): 152560 kB' 'Inactive(file): 3300284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8144876 kB' 'Mapped: 81680 kB' 'AnonPages: 294308 kB' 'Shmem: 4692032 kB' 'KernelStack: 14808 kB' 'PageTables: 4716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 407928 kB' 'Slab: 934432 kB' 'SReclaimable: 407928 kB' 'SUnreclaim: 526504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.619 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 
21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
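The long run of IFS=': ' / read -r / continue entries above is the setup/common.sh get_meminfo helper stepping field-by-field through /proc/meminfo (or, when a node argument is given, through /sys/devices/system/node/nodeN/meminfo) until it reaches the requested key and echoes its value. A minimal standalone sketch of that parsing approach; the function name read_meminfo_value is illustrative rather than the SPDK helper, and the per-node "Node N " prefix is stripped with sed instead of the script's extglob expansion:

#!/usr/bin/env bash
# Print the value of a /proc/meminfo (or per-NUMA-node meminfo) field.
# Usage: read_meminfo_value KEY [NODE]   e.g. read_meminfo_value HugePages_Surp 0
read_meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <n> "; drop it before splitting
    # on ': ' so the key lands in $var and the number in $val.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

read_meminfo_value HugePages_Total     # e.g. 1024 on this host
read_meminfo_value HugePages_Surp 0    # surplus hugepages reported by node 0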
00:04:06.620 node0=1024 expecting 1024 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:06.620 00:04:06.620 real 0m7.239s 00:04:06.620 user 0m2.880s 00:04:06.620 sys 0m4.408s 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.620 21:19:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:06.620 ************************************ 00:04:06.620 END TEST no_shrink_alloc 00:04:06.620 ************************************ 00:04:06.620 21:19:56 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:06.620 21:19:56 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:06.620 21:19:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:06.621 21:19:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.621 21:19:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.621 21:19:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.621 21:19:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.621 21:19:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.621 21:19:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.621 21:19:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.621 21:19:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.621 21:19:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.621 21:19:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.621 21:19:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:06.621 21:19:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:06.621 00:04:06.621 real 0m26.569s 00:04:06.621 user 0m10.299s 00:04:06.621 sys 0m16.465s 00:04:06.621 21:19:56 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.621 21:19:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.621 ************************************ 00:04:06.621 END TEST hugepages 00:04:06.621 ************************************ 00:04:06.621 21:19:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:06.621 21:19:56 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:06.621 21:19:56 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.621 21:19:56 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.621 21:19:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.883 ************************************ 00:04:06.883 START TEST driver 00:04:06.883 ************************************ 00:04:06.883 21:19:56 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:06.883 * Looking for test storage... 
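Before the hugepages suite closes above, the clear_hp step resets every per-node hugepage pool to zero (that is what the repeated "echo 0" entries are) and exports CLEAR_HUGE=yes for the stages that follow; the driver suite then starts. A minimal stand-alone equivalent, assuming the standard sysfs hugepage layout and using tee for the privileged writes (the test itself runs with enough privilege to write directly), would be:

  # Drain every hugepage pool on every NUMA node so the next suite starts clean.
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          [[ -e $hp/nr_hugepages ]] || continue
          echo 0 | sudo tee "$hp/nr_hugepages" > /dev/null
      done
  done
  export CLEAR_HUGE=yes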
00:04:06.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:06.883 21:19:56 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:06.883 21:19:56 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.883 21:19:56 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.175 21:20:01 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:12.175 21:20:01 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.175 21:20:01 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.175 21:20:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:12.175 ************************************ 00:04:12.175 START TEST guess_driver 00:04:12.175 ************************************ 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:12.175 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:12.175 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:12.175 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:12.175 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:12.175 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:12.175 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:12.175 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:12.175 21:20:01 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:12.175 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:12.175 Looking for driver=vfio-pci 00:04:12.176 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.176 21:20:01 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:12.176 21:20:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.176 21:20:01 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.477 21:20:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.738 21:20:05 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:15.738 21:20:05 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:15.738 21:20:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.738 21:20:05 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.033 00:04:21.033 real 0m8.679s 00:04:21.033 user 0m2.893s 00:04:21.033 sys 0m5.012s 00:04:21.033 21:20:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.033 21:20:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:21.033 ************************************ 00:04:21.033 END TEST guess_driver 00:04:21.033 ************************************ 00:04:21.033 21:20:10 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:21.033 00:04:21.033 real 0m13.711s 00:04:21.033 user 0m4.416s 00:04:21.033 sys 0m7.732s 00:04:21.033 21:20:10 
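The guess_driver run above comes down to one decision: unsafe no-IOMMU mode is off (N), there are 314 IOMMU groups under /sys/kernel/iommu_groups, and modprobe can resolve vfio_pci to real .ko modules, so the test settles on vfio-pci and then checks each device the config pass reports against that choice. A condensed sketch of that logic follows; pick_driver is my own name, and the uio_pci_generic fallback is only a placeholder for whatever the script would pick without a usable IOMMU.

  pick_driver() {
      local unsafe=N
      # vfio without an IOMMU is only usable if unsafe no-IOMMU mode is enabled
      if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
          unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      fi
      # any IOMMU groups at all, or the unsafe escape hatch?
      if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null || [[ $unsafe == [Yy1] ]]; then
          # confirm vfio_pci resolves to loadable kernel modules
          if modprobe --show-depends vfio_pci 2> /dev/null | grep -q '\.ko'; then
              echo vfio-pci
              return 0
          fi
      fi
      echo uio_pci_generic
  }

  driver=$(pick_driver)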
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.033 21:20:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:21.033 ************************************ 00:04:21.033 END TEST driver 00:04:21.033 ************************************ 00:04:21.033 21:20:10 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:21.033 21:20:10 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:21.033 21:20:10 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.033 21:20:10 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.033 21:20:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.033 ************************************ 00:04:21.033 START TEST devices 00:04:21.033 ************************************ 00:04:21.033 21:20:10 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:21.033 * Looking for test storage... 00:04:21.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:21.033 21:20:10 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:21.033 21:20:10 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:21.033 21:20:10 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.033 21:20:10 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:25.243 21:20:14 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:25.243 21:20:14 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:25.243 21:20:14 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:25.243 21:20:14 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:25.243 21:20:14 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:25.243 21:20:14 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:25.243 21:20:14 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:25.243 21:20:14 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:25.243 21:20:14 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:25.243 
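The devices suite that opens above screens candidate disks before the mount tests: zoned namespaces are excluded, nvme multipath controller nodes (the !(*c*) glob) are skipped, disks that the spdk-gpt.py / blkid probe flags as already in use are rejected ("No valid GPT data, bailing" below means this one is free), and the namespace must be at least min_disk_size=3221225472 bytes (3 GiB). A reduced sketch of the zoned/size part of that screen, leaving the GPT probe out:

  min_disk_size=$((3 * 1024 * 1024 * 1024))      # 3221225472, as in the trace
  blocks=()
  for dev in /sys/block/nvme*; do
      name=${dev##*/}
      [[ $name == *c* ]] && continue               # skip nvme multipath controller nodes
      [[ -e $dev/size ]] || continue
      if [[ -e $dev/queue/zoned && $(< "$dev/queue/zoned") != none ]]; then
          continue                                 # zoned namespaces are not usable here
      fi
      size=$(( $(< "$dev/size") * 512 ))           # sysfs reports 512-byte sectors
      (( size >= min_disk_size )) && blocks+=("$name")
  done
  ((${#blocks[@]})) && printf 'candidate disk: %s\n' "${blocks[@]}"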
21:20:14 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:25.243 No valid GPT data, bailing 00:04:25.243 21:20:14 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:25.243 21:20:14 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:25.243 21:20:14 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:25.243 21:20:14 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:25.243 21:20:14 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:25.243 21:20:14 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:25.243 21:20:14 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:25.243 21:20:14 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.243 21:20:14 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.243 21:20:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:25.243 ************************************ 00:04:25.243 START TEST nvme_mount 00:04:25.243 ************************************ 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:25.243 21:20:14 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:25.505 Creating new GPT entries in memory. 00:04:25.505 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:25.505 other utilities. 00:04:25.505 21:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:25.505 21:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.505 21:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.505 21:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.505 21:20:15 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:26.891 Creating new GPT entries in memory. 00:04:26.891 The operation has completed successfully. 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1940097 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.891 21:20:16 
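The nvme_mount setup traced above reduces to a handful of commands: wipe the GPT, create one partition, format it, mount it. In the sketch below MNT is a placeholder for the test's nvme_mount directory, the sector range is the one from the trace (2048..2099199, exactly 1 GiB of 512-byte sectors), and partprobe stands in for the sync_dev_uevents.sh wait the real script uses; all of it is destructive and only makes sense on a scratch disk.

  disk=/dev/nvme0n1
  MNT=/tmp/spdk_nvme_mount                     # placeholder for .../test/setup/nvme_mount

  sudo sgdisk "$disk" --zap-all                # destroy any existing GPT/MBR
  sudo sgdisk "$disk" --new=1:2048:2099199     # one 1 GiB partition
  sudo partprobe "$disk"                       # crude stand-in for waiting on the udev event

  sudo mkfs.ext4 -qF "${disk}p1"
  sudo mkdir -p "$MNT"
  sudo mount "${disk}p1" "$MNT"

The verify step that follows in the log then sets PCI_ALLOWED to the one NVMe address and checks that setup.sh config reports the device as mounted rather than binding it.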
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.891 21:20:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.193 21:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.453 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.453 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:30.453 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.453 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.453 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.453 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:30.453 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.453 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.453 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:30.453 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:30.453 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:30.453 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:30.453 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:30.714 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:30.714 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:30.714 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:30.714 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.714 21:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.053 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.312 21:20:23 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.312 21:20:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.605 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.865 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.865 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:37.865 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:37.865 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:37.865 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.865 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.865 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.865 21:20:27 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.865 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.865 00:04:37.865 real 0m13.285s 00:04:37.865 user 0m4.061s 00:04:37.865 sys 0m7.045s 00:04:37.865 21:20:27 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.865 21:20:27 
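The cleanup_nvme pass above is the mirror image of that setup: unmount the test directory if it is still mounted, then wipe filesystem and partition-table signatures so the disk comes back blank (the "2 bytes were erased ... 53 ef" line is wipefs removing the ext4 superblock magic). Reusing the MNT placeholder from the earlier sketch:

  mountpoint -q "$MNT" && sudo umount "$MNT"
  [[ -b /dev/nvme0n1p1 ]] && sudo wipefs --all /dev/nvme0n1p1
  [[ -b /dev/nvme0n1 ]] && sudo wipefs --all /dev/nvme0n1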
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:37.865 ************************************ 00:04:37.865 END TEST nvme_mount 00:04:37.865 ************************************ 00:04:37.865 21:20:27 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:37.865 21:20:27 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:37.865 21:20:27 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.865 21:20:27 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.865 21:20:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:37.865 ************************************ 00:04:37.866 START TEST dm_mount 00:04:37.866 ************************************ 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:37.866 21:20:27 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:39.248 Creating new GPT entries in memory. 00:04:39.248 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:39.248 other utilities. 00:04:39.248 21:20:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:39.248 21:20:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.248 21:20:28 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:39.248 21:20:28 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.248 21:20:28 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:40.186 Creating new GPT entries in memory. 00:04:40.187 The operation has completed successfully. 00:04:40.187 21:20:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:40.187 21:20:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.187 21:20:29 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.187 21:20:29 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.187 21:20:29 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:41.128 The operation has completed successfully. 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1945168 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- 
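The dm_mount suite above stacks a device-mapper node on top of the two freshly created 1 GiB partitions, resolves /dev/mapper/nvme_dm_test to its dm-0 alias, checks that both partitions list dm-0 as a holder, and then formats and mounts it exactly like the plain-partition case. The table below is an illustrative linear concatenation; the real script builds its own table and feeds it to dmsetup the same way, on stdin. blockdev --getsz reports 512-byte sectors, which is the unit dm tables expect.

  p1=/dev/nvme0n1p1
  p2=/dev/nvme0n1p2
  sz1=$(sudo blockdev --getsz "$p1")
  sz2=$(sudo blockdev --getsz "$p2")

  # two linear segments back to back: p1 first, then p2
  table="0 $sz1 linear $p1 0
  $sz1 $sz2 linear $p2 0"
  printf '%s\n' "$table" | sudo dmsetup create nvme_dm_test

  dm=$(readlink -f /dev/mapper/nvme_dm_test)    # e.g. /dev/dm-0, as in the trace
  ls "/sys/class/block/${p1##*/}/holders"       # should now list ${dm##*/}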
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.128 21:20:30 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.417 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:44.679 21:20:34 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.679 21:20:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:47.981 21:20:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.241 21:20:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.241 21:20:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:48.241 21:20:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:48.241 21:20:38 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:48.241 21:20:38 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.241 21:20:38 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:48.241 21:20:38 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:48.502 21:20:38 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.502 21:20:38 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:48.502 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:48.502 21:20:38 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:48.502 21:20:38 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:48.502 00:04:48.502 real 0m10.420s 00:04:48.502 user 0m2.786s 00:04:48.502 sys 0m4.657s 00:04:48.502 21:20:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.502 21:20:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:48.502 ************************************ 00:04:48.502 END TEST dm_mount 00:04:48.502 ************************************ 00:04:48.502 21:20:38 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:04:48.502 21:20:38 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:48.502 21:20:38 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:48.502 21:20:38 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.502 21:20:38 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.502 21:20:38 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:48.502 21:20:38 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.502 21:20:38 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:48.762 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:48.762 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:48.762 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:48.762 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:48.762 21:20:38 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:48.763 21:20:38 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.763 21:20:38 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:48.763 21:20:38 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.763 21:20:38 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:48.763 21:20:38 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.763 21:20:38 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:48.763 00:04:48.763 real 0m28.175s 00:04:48.763 user 0m8.457s 00:04:48.763 sys 0m14.428s 00:04:48.763 21:20:38 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.763 21:20:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:48.763 ************************************ 00:04:48.763 END TEST devices 00:04:48.763 ************************************ 00:04:48.763 21:20:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:48.763 00:04:48.763 real 1m34.265s 00:04:48.763 user 0m31.639s 00:04:48.763 sys 0m53.702s 00:04:48.763 21:20:38 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.763 21:20:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:48.763 ************************************ 00:04:48.763 END TEST setup.sh 00:04:48.763 ************************************ 00:04:48.763 21:20:38 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.763 21:20:38 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:52.056 Hugepages 00:04:52.056 node hugesize free / total 00:04:52.056 node0 1048576kB 0 / 0 00:04:52.056 node0 2048kB 2048 / 2048 00:04:52.056 node1 1048576kB 0 / 0 00:04:52.056 node1 2048kB 0 / 0 00:04:52.056 00:04:52.056 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:52.056 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:52.056 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:52.056 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:52.056 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:52.056 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:52.056 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:52.056 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:52.056 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:52.056 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:52.056 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:52.056 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:52.056 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:52.056 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:52.315 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:52.315 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:52.315 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:52.315 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:52.315 21:20:41 -- spdk/autotest.sh@130 -- # uname -s 00:04:52.315 21:20:41 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:52.315 21:20:41 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:52.315 21:20:41 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:55.636 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:55.636 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:55.636 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:55.636 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:55.636 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:55.636 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:55.636 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:55.636 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:55.636 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:55.636 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:55.636 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:55.636 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:55.636 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:55.636 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:55.636 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:55.636 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:57.540 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:57.798 21:20:47 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:58.732 21:20:48 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:58.732 21:20:48 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:58.732 21:20:48 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:58.732 21:20:48 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:58.732 21:20:48 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:58.732 21:20:48 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:58.732 21:20:48 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.732 21:20:48 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.732 21:20:48 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:58.991 21:20:48 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:58.991 21:20:48 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:58.991 21:20:48 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.305 Waiting for block devices as requested 00:05:02.305 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:02.305 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:02.305 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:02.305 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:02.305 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:02.565 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:02.565 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:02.565 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:02.826 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:05:02.826 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:02.826 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:03.086 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:03.086 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:03.086 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:03.345 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:03.345 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:03.345 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:03.611 21:20:53 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:03.611 21:20:53 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:03.611 21:20:53 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:03.611 21:20:53 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:03.611 21:20:53 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:03.611 21:20:53 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:03.611 21:20:53 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:03.611 21:20:53 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:03.611 21:20:53 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:03.611 21:20:53 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:03.611 21:20:53 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:03.611 21:20:53 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:03.611 21:20:53 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:03.611 21:20:53 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:03.611 21:20:53 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:03.611 21:20:53 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:03.611 21:20:53 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:03.611 21:20:53 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:03.611 21:20:53 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:03.612 21:20:53 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:03.612 21:20:53 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:03.612 21:20:53 -- common/autotest_common.sh@1557 -- # continue 00:05:03.612 21:20:53 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:03.612 21:20:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.612 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:05:03.612 21:20:53 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:03.612 21:20:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.612 21:20:53 -- common/autotest_common.sh@10 -- # set +x 00:05:03.612 21:20:53 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:06.909 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:06.909 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:06.909 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:07.169 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:07.169 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:07.169 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:07.169 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:07.169 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:07.169 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:07.169 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
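A minimal sketch of the controller capability check traced above, using the same nvme-cli fields (oacs, unvmcap) that appear in the log; the /dev/nvme0 path and the 0x8 namespace-management bit are taken from the trace:

ctrlr=/dev/nvme0
# OACS (Optional Admin Command Support): bit 3 set means namespace management is supported
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
if (( oacs & 0x8 )); then
  # unvmcap of 0, as in the trace, means there is no unallocated capacity left to revert
  unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
  echo "namespace management supported, unallocated capacity: $unvmcap"
fi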
00:05:07.169 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:07.169 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:07.169 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:07.169 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:07.169 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:07.169 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:07.169 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:07.429 21:20:57 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:07.429 21:20:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.429 21:20:57 -- common/autotest_common.sh@10 -- # set +x 00:05:07.692 21:20:57 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:07.692 21:20:57 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:07.692 21:20:57 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:07.692 21:20:57 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:07.692 21:20:57 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:07.692 21:20:57 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:07.692 21:20:57 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:07.692 21:20:57 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:07.692 21:20:57 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.692 21:20:57 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:07.692 21:20:57 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:07.692 21:20:57 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:07.692 21:20:57 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:07.692 21:20:57 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:07.692 21:20:57 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:07.692 21:20:57 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:07.692 21:20:57 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:07.692 21:20:57 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:07.692 21:20:57 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:07.692 21:20:57 -- common/autotest_common.sh@1593 -- # return 0 00:05:07.692 21:20:57 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:07.692 21:20:57 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:07.692 21:20:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:07.692 21:20:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:07.692 21:20:57 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:07.692 21:20:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:07.692 21:20:57 -- common/autotest_common.sh@10 -- # set +x 00:05:07.692 21:20:57 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:07.692 21:20:57 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:07.692 21:20:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.692 21:20:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.692 21:20:57 -- common/autotest_common.sh@10 -- # set +x 00:05:07.692 ************************************ 00:05:07.692 START TEST env 00:05:07.692 ************************************ 00:05:07.692 21:20:57 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:07.953 * Looking for test storage... 
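A minimal sketch of the BDF enumeration and device-id filter traced above; gen_nvme.sh and the jq expression come from the log, and 0x0a54 is the device id that get_nvme_bdfs_by_id compares against before attempting an opal revert:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
for bdf in "${bdfs[@]}"; do
  device=$(cat "/sys/bus/pci/devices/$bdf/device")
  # on this node the only controller is 144d:a80a (Samsung), so the 0x0a54 filter matches nothing
  [[ $device == 0x0a54 ]] && echo "$bdf"
done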
00:05:07.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:07.953 21:20:57 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:07.953 21:20:57 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.953 21:20:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.953 21:20:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.953 ************************************ 00:05:07.953 START TEST env_memory 00:05:07.953 ************************************ 00:05:07.953 21:20:57 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:07.953 00:05:07.953 00:05:07.953 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.953 http://cunit.sourceforge.net/ 00:05:07.953 00:05:07.953 00:05:07.953 Suite: memory 00:05:07.953 Test: alloc and free memory map ...[2024-07-15 21:20:57.595913] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:07.953 passed 00:05:07.953 Test: mem map translation ...[2024-07-15 21:20:57.621556] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:07.953 [2024-07-15 21:20:57.621585] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:07.953 [2024-07-15 21:20:57.621632] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:07.953 [2024-07-15 21:20:57.621639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:07.953 passed 00:05:07.953 Test: mem map registration ...[2024-07-15 21:20:57.677025] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:07.953 [2024-07-15 21:20:57.677046] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:07.953 passed 00:05:07.953 Test: mem map adjacent registrations ...passed 00:05:07.953 00:05:07.953 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.953 suites 1 1 n/a 0 0 00:05:07.953 tests 4 4 4 0 0 00:05:07.953 asserts 152 152 152 0 n/a 00:05:07.953 00:05:07.953 Elapsed time = 0.193 seconds 00:05:07.953 00:05:07.953 real 0m0.208s 00:05:07.953 user 0m0.198s 00:05:07.953 sys 0m0.009s 00:05:07.953 21:20:57 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.953 21:20:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:07.953 ************************************ 00:05:07.953 END TEST env_memory 00:05:07.953 ************************************ 00:05:08.213 21:20:57 env -- common/autotest_common.sh@1142 -- # return 0 00:05:08.213 21:20:57 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:08.213 21:20:57 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
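A much-simplified sketch of what the run_test wrapper traced above does (the real helper in autotest_common.sh also manages xtrace, timing and the asterisk banners); the test name and binary path are the ones from the log:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
run_test() {
  local name=$1; shift
  echo "START TEST $name"
  "$@"; local rc=$?        # run the test binary passed as the remaining arguments
  echo "END TEST $name"
  return $rc
}
run_test env_memory "$rootdir/test/env/memory/memory_ut"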
00:05:08.213 21:20:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.213 21:20:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.213 ************************************ 00:05:08.213 START TEST env_vtophys 00:05:08.213 ************************************ 00:05:08.213 21:20:57 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:08.213 EAL: lib.eal log level changed from notice to debug 00:05:08.213 EAL: Detected lcore 0 as core 0 on socket 0 00:05:08.213 EAL: Detected lcore 1 as core 1 on socket 0 00:05:08.213 EAL: Detected lcore 2 as core 2 on socket 0 00:05:08.213 EAL: Detected lcore 3 as core 3 on socket 0 00:05:08.213 EAL: Detected lcore 4 as core 4 on socket 0 00:05:08.213 EAL: Detected lcore 5 as core 5 on socket 0 00:05:08.213 EAL: Detected lcore 6 as core 6 on socket 0 00:05:08.213 EAL: Detected lcore 7 as core 7 on socket 0 00:05:08.213 EAL: Detected lcore 8 as core 8 on socket 0 00:05:08.213 EAL: Detected lcore 9 as core 9 on socket 0 00:05:08.213 EAL: Detected lcore 10 as core 10 on socket 0 00:05:08.213 EAL: Detected lcore 11 as core 11 on socket 0 00:05:08.213 EAL: Detected lcore 12 as core 12 on socket 0 00:05:08.213 EAL: Detected lcore 13 as core 13 on socket 0 00:05:08.213 EAL: Detected lcore 14 as core 14 on socket 0 00:05:08.213 EAL: Detected lcore 15 as core 15 on socket 0 00:05:08.213 EAL: Detected lcore 16 as core 16 on socket 0 00:05:08.213 EAL: Detected lcore 17 as core 17 on socket 0 00:05:08.213 EAL: Detected lcore 18 as core 18 on socket 0 00:05:08.213 EAL: Detected lcore 19 as core 19 on socket 0 00:05:08.213 EAL: Detected lcore 20 as core 20 on socket 0 00:05:08.213 EAL: Detected lcore 21 as core 21 on socket 0 00:05:08.213 EAL: Detected lcore 22 as core 22 on socket 0 00:05:08.213 EAL: Detected lcore 23 as core 23 on socket 0 00:05:08.213 EAL: Detected lcore 24 as core 24 on socket 0 00:05:08.213 EAL: Detected lcore 25 as core 25 on socket 0 00:05:08.213 EAL: Detected lcore 26 as core 26 on socket 0 00:05:08.213 EAL: Detected lcore 27 as core 27 on socket 0 00:05:08.213 EAL: Detected lcore 28 as core 28 on socket 0 00:05:08.213 EAL: Detected lcore 29 as core 29 on socket 0 00:05:08.213 EAL: Detected lcore 30 as core 30 on socket 0 00:05:08.213 EAL: Detected lcore 31 as core 31 on socket 0 00:05:08.213 EAL: Detected lcore 32 as core 32 on socket 0 00:05:08.213 EAL: Detected lcore 33 as core 33 on socket 0 00:05:08.213 EAL: Detected lcore 34 as core 34 on socket 0 00:05:08.213 EAL: Detected lcore 35 as core 35 on socket 0 00:05:08.213 EAL: Detected lcore 36 as core 0 on socket 1 00:05:08.213 EAL: Detected lcore 37 as core 1 on socket 1 00:05:08.213 EAL: Detected lcore 38 as core 2 on socket 1 00:05:08.213 EAL: Detected lcore 39 as core 3 on socket 1 00:05:08.213 EAL: Detected lcore 40 as core 4 on socket 1 00:05:08.213 EAL: Detected lcore 41 as core 5 on socket 1 00:05:08.213 EAL: Detected lcore 42 as core 6 on socket 1 00:05:08.213 EAL: Detected lcore 43 as core 7 on socket 1 00:05:08.213 EAL: Detected lcore 44 as core 8 on socket 1 00:05:08.213 EAL: Detected lcore 45 as core 9 on socket 1 00:05:08.213 EAL: Detected lcore 46 as core 10 on socket 1 00:05:08.213 EAL: Detected lcore 47 as core 11 on socket 1 00:05:08.213 EAL: Detected lcore 48 as core 12 on socket 1 00:05:08.213 EAL: Detected lcore 49 as core 13 on socket 1 00:05:08.213 EAL: Detected lcore 50 as core 14 on socket 1 00:05:08.213 EAL: Detected lcore 51 as core 15 on socket 1 00:05:08.213 
EAL: Detected lcore 52 as core 16 on socket 1 00:05:08.213 EAL: Detected lcore 53 as core 17 on socket 1 00:05:08.213 EAL: Detected lcore 54 as core 18 on socket 1 00:05:08.213 EAL: Detected lcore 55 as core 19 on socket 1 00:05:08.213 EAL: Detected lcore 56 as core 20 on socket 1 00:05:08.213 EAL: Detected lcore 57 as core 21 on socket 1 00:05:08.213 EAL: Detected lcore 58 as core 22 on socket 1 00:05:08.213 EAL: Detected lcore 59 as core 23 on socket 1 00:05:08.213 EAL: Detected lcore 60 as core 24 on socket 1 00:05:08.214 EAL: Detected lcore 61 as core 25 on socket 1 00:05:08.214 EAL: Detected lcore 62 as core 26 on socket 1 00:05:08.214 EAL: Detected lcore 63 as core 27 on socket 1 00:05:08.214 EAL: Detected lcore 64 as core 28 on socket 1 00:05:08.214 EAL: Detected lcore 65 as core 29 on socket 1 00:05:08.214 EAL: Detected lcore 66 as core 30 on socket 1 00:05:08.214 EAL: Detected lcore 67 as core 31 on socket 1 00:05:08.214 EAL: Detected lcore 68 as core 32 on socket 1 00:05:08.214 EAL: Detected lcore 69 as core 33 on socket 1 00:05:08.214 EAL: Detected lcore 70 as core 34 on socket 1 00:05:08.214 EAL: Detected lcore 71 as core 35 on socket 1 00:05:08.214 EAL: Detected lcore 72 as core 0 on socket 0 00:05:08.214 EAL: Detected lcore 73 as core 1 on socket 0 00:05:08.214 EAL: Detected lcore 74 as core 2 on socket 0 00:05:08.214 EAL: Detected lcore 75 as core 3 on socket 0 00:05:08.214 EAL: Detected lcore 76 as core 4 on socket 0 00:05:08.214 EAL: Detected lcore 77 as core 5 on socket 0 00:05:08.214 EAL: Detected lcore 78 as core 6 on socket 0 00:05:08.214 EAL: Detected lcore 79 as core 7 on socket 0 00:05:08.214 EAL: Detected lcore 80 as core 8 on socket 0 00:05:08.214 EAL: Detected lcore 81 as core 9 on socket 0 00:05:08.214 EAL: Detected lcore 82 as core 10 on socket 0 00:05:08.214 EAL: Detected lcore 83 as core 11 on socket 0 00:05:08.214 EAL: Detected lcore 84 as core 12 on socket 0 00:05:08.214 EAL: Detected lcore 85 as core 13 on socket 0 00:05:08.214 EAL: Detected lcore 86 as core 14 on socket 0 00:05:08.214 EAL: Detected lcore 87 as core 15 on socket 0 00:05:08.214 EAL: Detected lcore 88 as core 16 on socket 0 00:05:08.214 EAL: Detected lcore 89 as core 17 on socket 0 00:05:08.214 EAL: Detected lcore 90 as core 18 on socket 0 00:05:08.214 EAL: Detected lcore 91 as core 19 on socket 0 00:05:08.214 EAL: Detected lcore 92 as core 20 on socket 0 00:05:08.214 EAL: Detected lcore 93 as core 21 on socket 0 00:05:08.214 EAL: Detected lcore 94 as core 22 on socket 0 00:05:08.214 EAL: Detected lcore 95 as core 23 on socket 0 00:05:08.214 EAL: Detected lcore 96 as core 24 on socket 0 00:05:08.214 EAL: Detected lcore 97 as core 25 on socket 0 00:05:08.214 EAL: Detected lcore 98 as core 26 on socket 0 00:05:08.214 EAL: Detected lcore 99 as core 27 on socket 0 00:05:08.214 EAL: Detected lcore 100 as core 28 on socket 0 00:05:08.214 EAL: Detected lcore 101 as core 29 on socket 0 00:05:08.214 EAL: Detected lcore 102 as core 30 on socket 0 00:05:08.214 EAL: Detected lcore 103 as core 31 on socket 0 00:05:08.214 EAL: Detected lcore 104 as core 32 on socket 0 00:05:08.214 EAL: Detected lcore 105 as core 33 on socket 0 00:05:08.214 EAL: Detected lcore 106 as core 34 on socket 0 00:05:08.214 EAL: Detected lcore 107 as core 35 on socket 0 00:05:08.214 EAL: Detected lcore 108 as core 0 on socket 1 00:05:08.214 EAL: Detected lcore 109 as core 1 on socket 1 00:05:08.214 EAL: Detected lcore 110 as core 2 on socket 1 00:05:08.214 EAL: Detected lcore 111 as core 3 on socket 1 00:05:08.214 EAL: Detected 
lcore 112 as core 4 on socket 1 00:05:08.214 EAL: Detected lcore 113 as core 5 on socket 1 00:05:08.214 EAL: Detected lcore 114 as core 6 on socket 1 00:05:08.214 EAL: Detected lcore 115 as core 7 on socket 1 00:05:08.214 EAL: Detected lcore 116 as core 8 on socket 1 00:05:08.214 EAL: Detected lcore 117 as core 9 on socket 1 00:05:08.214 EAL: Detected lcore 118 as core 10 on socket 1 00:05:08.214 EAL: Detected lcore 119 as core 11 on socket 1 00:05:08.214 EAL: Detected lcore 120 as core 12 on socket 1 00:05:08.214 EAL: Detected lcore 121 as core 13 on socket 1 00:05:08.214 EAL: Detected lcore 122 as core 14 on socket 1 00:05:08.214 EAL: Detected lcore 123 as core 15 on socket 1 00:05:08.214 EAL: Detected lcore 124 as core 16 on socket 1 00:05:08.214 EAL: Detected lcore 125 as core 17 on socket 1 00:05:08.214 EAL: Detected lcore 126 as core 18 on socket 1 00:05:08.214 EAL: Detected lcore 127 as core 19 on socket 1 00:05:08.214 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:08.214 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:08.214 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:08.214 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:08.214 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:08.214 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:08.214 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:08.214 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:08.214 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:08.214 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:08.214 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:08.214 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:08.214 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:08.214 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:08.214 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:08.214 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:08.214 EAL: Maximum logical cores by configuration: 128 00:05:08.214 EAL: Detected CPU lcores: 128 00:05:08.214 EAL: Detected NUMA nodes: 2 00:05:08.214 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:08.214 EAL: Detected shared linkage of DPDK 00:05:08.214 EAL: No shared files mode enabled, IPC will be disabled 00:05:08.214 EAL: Bus pci wants IOVA as 'DC' 00:05:08.214 EAL: Buses did not request a specific IOVA mode. 00:05:08.214 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:08.214 EAL: Selected IOVA mode 'VA' 00:05:08.214 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.214 EAL: Probing VFIO support... 00:05:08.214 EAL: IOMMU type 1 (Type 1) is supported 00:05:08.214 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:08.214 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:08.214 EAL: VFIO support initialized 00:05:08.214 EAL: Ask a virtual area of 0x2e000 bytes 00:05:08.214 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:08.214 EAL: Setting up physically contiguous memory... 
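The "No free 2048 kB hugepages reported on node 1" notice above matches the earlier Hugepages table (node0 has 2048 pages, node1 has none). A minimal sketch of checking that directly, assuming the standard sysfs layout:

for node in /sys/devices/system/node/node*; do
  hp=$node/hugepages/hugepages-2048kB
  echo "$(basename "$node"): $(cat "$hp/free_hugepages") free / $(cat "$hp/nr_hugepages") total 2048kB pages"
done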
00:05:08.214 EAL: Setting maximum number of open files to 524288 00:05:08.214 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:08.214 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:08.214 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:08.214 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.214 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:08.214 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.214 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.214 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:08.214 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:08.214 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.214 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:08.214 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.214 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.214 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:08.214 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:08.214 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.214 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:08.214 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.214 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.214 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:08.214 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:08.214 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.214 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:08.214 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.214 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.214 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:08.214 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:08.214 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:08.214 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.214 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:08.214 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.214 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.214 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:08.214 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:08.214 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.214 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:08.214 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.214 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.214 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:08.214 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:08.214 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.214 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:08.214 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.214 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.214 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:08.214 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:08.214 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.214 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:08.214 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.214 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.214 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:08.214 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:08.214 EAL: Hugepages will be freed exactly as allocated. 00:05:08.214 EAL: No shared files mode enabled, IPC is disabled 00:05:08.214 EAL: No shared files mode enabled, IPC is disabled 00:05:08.214 EAL: TSC frequency is ~2400000 KHz 00:05:08.214 EAL: Main lcore 0 is ready (tid=7f2bbbd07a00;cpuset=[0]) 00:05:08.214 EAL: Trying to obtain current memory policy. 00:05:08.214 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.214 EAL: Restoring previous memory policy: 0 00:05:08.214 EAL: request: mp_malloc_sync 00:05:08.214 EAL: No shared files mode enabled, IPC is disabled 00:05:08.214 EAL: Heap on socket 0 was expanded by 2MB 00:05:08.214 EAL: No shared files mode enabled, IPC is disabled 00:05:08.214 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:08.214 EAL: Mem event callback 'spdk:(nil)' registered 00:05:08.214 00:05:08.214 00:05:08.214 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.214 http://cunit.sourceforge.net/ 00:05:08.214 00:05:08.214 00:05:08.214 Suite: components_suite 00:05:08.214 Test: vtophys_malloc_test ...passed 00:05:08.214 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:08.214 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.214 EAL: Restoring previous memory policy: 4 00:05:08.214 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.214 EAL: request: mp_malloc_sync 00:05:08.214 EAL: No shared files mode enabled, IPC is disabled 00:05:08.214 EAL: Heap on socket 0 was expanded by 4MB 00:05:08.214 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.214 EAL: request: mp_malloc_sync 00:05:08.214 EAL: No shared files mode enabled, IPC is disabled 00:05:08.214 EAL: Heap on socket 0 was shrunk by 4MB 00:05:08.214 EAL: Trying to obtain current memory policy. 00:05:08.214 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.214 EAL: Restoring previous memory policy: 4 00:05:08.214 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.215 EAL: request: mp_malloc_sync 00:05:08.215 EAL: No shared files mode enabled, IPC is disabled 00:05:08.215 EAL: Heap on socket 0 was expanded by 6MB 00:05:08.215 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.215 EAL: request: mp_malloc_sync 00:05:08.215 EAL: No shared files mode enabled, IPC is disabled 00:05:08.215 EAL: Heap on socket 0 was shrunk by 6MB 00:05:08.215 EAL: Trying to obtain current memory policy. 00:05:08.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.215 EAL: Restoring previous memory policy: 4 00:05:08.215 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.215 EAL: request: mp_malloc_sync 00:05:08.215 EAL: No shared files mode enabled, IPC is disabled 00:05:08.215 EAL: Heap on socket 0 was expanded by 10MB 00:05:08.215 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.215 EAL: request: mp_malloc_sync 00:05:08.215 EAL: No shared files mode enabled, IPC is disabled 00:05:08.215 EAL: Heap on socket 0 was shrunk by 10MB 00:05:08.215 EAL: Trying to obtain current memory policy. 
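The lcore-to-core/socket mapping that EAL prints above can be cross-checked from the shell; a minimal sketch, assuming lscpu and numactl are available on the node:

lscpu -e=CPU,CORE,SOCKET,NODE | head    # lcore N in the EAL output corresponds to logical CPU N here
numactl --hardware                      # shows the 2 NUMA nodes and the memory attached to each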
00:05:08.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.215 EAL: Restoring previous memory policy: 4 00:05:08.215 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.215 EAL: request: mp_malloc_sync 00:05:08.215 EAL: No shared files mode enabled, IPC is disabled 00:05:08.215 EAL: Heap on socket 0 was expanded by 18MB 00:05:08.215 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.215 EAL: request: mp_malloc_sync 00:05:08.215 EAL: No shared files mode enabled, IPC is disabled 00:05:08.215 EAL: Heap on socket 0 was shrunk by 18MB 00:05:08.215 EAL: Trying to obtain current memory policy. 00:05:08.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.215 EAL: Restoring previous memory policy: 4 00:05:08.215 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.215 EAL: request: mp_malloc_sync 00:05:08.215 EAL: No shared files mode enabled, IPC is disabled 00:05:08.215 EAL: Heap on socket 0 was expanded by 34MB 00:05:08.215 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.215 EAL: request: mp_malloc_sync 00:05:08.215 EAL: No shared files mode enabled, IPC is disabled 00:05:08.215 EAL: Heap on socket 0 was shrunk by 34MB 00:05:08.215 EAL: Trying to obtain current memory policy. 00:05:08.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.215 EAL: Restoring previous memory policy: 4 00:05:08.215 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.215 EAL: request: mp_malloc_sync 00:05:08.215 EAL: No shared files mode enabled, IPC is disabled 00:05:08.215 EAL: Heap on socket 0 was expanded by 66MB 00:05:08.215 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.215 EAL: request: mp_malloc_sync 00:05:08.215 EAL: No shared files mode enabled, IPC is disabled 00:05:08.215 EAL: Heap on socket 0 was shrunk by 66MB 00:05:08.215 EAL: Trying to obtain current memory policy. 00:05:08.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.215 EAL: Restoring previous memory policy: 4 00:05:08.215 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.215 EAL: request: mp_malloc_sync 00:05:08.215 EAL: No shared files mode enabled, IPC is disabled 00:05:08.215 EAL: Heap on socket 0 was expanded by 130MB 00:05:08.215 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.215 EAL: request: mp_malloc_sync 00:05:08.215 EAL: No shared files mode enabled, IPC is disabled 00:05:08.215 EAL: Heap on socket 0 was shrunk by 130MB 00:05:08.215 EAL: Trying to obtain current memory policy. 00:05:08.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.475 EAL: Restoring previous memory policy: 4 00:05:08.475 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.475 EAL: request: mp_malloc_sync 00:05:08.475 EAL: No shared files mode enabled, IPC is disabled 00:05:08.475 EAL: Heap on socket 0 was expanded by 258MB 00:05:08.475 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.475 EAL: request: mp_malloc_sync 00:05:08.475 EAL: No shared files mode enabled, IPC is disabled 00:05:08.475 EAL: Heap on socket 0 was shrunk by 258MB 00:05:08.475 EAL: Trying to obtain current memory policy. 
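Each "expanded by N MB" / "shrunk by N MB" pair above is the vtophys malloc test growing and then releasing the hugepage-backed heap; one way to observe this from outside the test, assuming procfs, is to watch the hugepage counters while it runs:

watch -n1 'grep -E "HugePages_(Total|Free|Rsvd)" /proc/meminfo'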
00:05:08.475 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.475 EAL: Restoring previous memory policy: 4 00:05:08.475 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.475 EAL: request: mp_malloc_sync 00:05:08.475 EAL: No shared files mode enabled, IPC is disabled 00:05:08.475 EAL: Heap on socket 0 was expanded by 514MB 00:05:08.475 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.475 EAL: request: mp_malloc_sync 00:05:08.475 EAL: No shared files mode enabled, IPC is disabled 00:05:08.475 EAL: Heap on socket 0 was shrunk by 514MB 00:05:08.475 EAL: Trying to obtain current memory policy. 00:05:08.475 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.735 EAL: Restoring previous memory policy: 4 00:05:08.735 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.735 EAL: request: mp_malloc_sync 00:05:08.735 EAL: No shared files mode enabled, IPC is disabled 00:05:08.735 EAL: Heap on socket 0 was expanded by 1026MB 00:05:08.735 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.995 EAL: request: mp_malloc_sync 00:05:08.995 EAL: No shared files mode enabled, IPC is disabled 00:05:08.995 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:08.995 passed 00:05:08.995 00:05:08.995 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.995 suites 1 1 n/a 0 0 00:05:08.995 tests 2 2 2 0 0 00:05:08.995 asserts 497 497 497 0 n/a 00:05:08.995 00:05:08.995 Elapsed time = 0.657 seconds 00:05:08.995 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.995 EAL: request: mp_malloc_sync 00:05:08.995 EAL: No shared files mode enabled, IPC is disabled 00:05:08.995 EAL: Heap on socket 0 was shrunk by 2MB 00:05:08.995 EAL: No shared files mode enabled, IPC is disabled 00:05:08.995 EAL: No shared files mode enabled, IPC is disabled 00:05:08.995 EAL: No shared files mode enabled, IPC is disabled 00:05:08.995 00:05:08.995 real 0m0.775s 00:05:08.995 user 0m0.408s 00:05:08.995 sys 0m0.342s 00:05:08.995 21:20:58 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.995 21:20:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:08.995 ************************************ 00:05:08.995 END TEST env_vtophys 00:05:08.995 ************************************ 00:05:08.995 21:20:58 env -- common/autotest_common.sh@1142 -- # return 0 00:05:08.995 21:20:58 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:08.995 21:20:58 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.995 21:20:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.995 21:20:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.995 ************************************ 00:05:08.995 START TEST env_pci 00:05:08.995 ************************************ 00:05:08.995 21:20:58 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:08.995 00:05:08.995 00:05:08.995 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.995 http://cunit.sourceforge.net/ 00:05:08.995 00:05:08.995 00:05:08.995 Suite: pci 00:05:08.995 Test: pci_hook ...[2024-07-15 21:20:58.702006] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1956346 has claimed it 00:05:08.995 EAL: Cannot find device (10000:00:01.0) 00:05:08.995 EAL: Failed to attach device on primary process 00:05:08.995 passed 00:05:08.995 
00:05:08.995 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.995 suites 1 1 n/a 0 0 00:05:08.995 tests 1 1 1 0 0 00:05:08.995 asserts 25 25 25 0 n/a 00:05:08.995 00:05:08.995 Elapsed time = 0.031 seconds 00:05:08.995 00:05:08.995 real 0m0.052s 00:05:08.995 user 0m0.016s 00:05:08.995 sys 0m0.036s 00:05:08.995 21:20:58 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.995 21:20:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:08.995 ************************************ 00:05:08.995 END TEST env_pci 00:05:08.995 ************************************ 00:05:08.995 21:20:58 env -- common/autotest_common.sh@1142 -- # return 0 00:05:08.995 21:20:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:08.995 21:20:58 env -- env/env.sh@15 -- # uname 00:05:08.995 21:20:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:08.995 21:20:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:08.995 21:20:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.995 21:20:58 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:08.995 21:20:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.995 21:20:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.255 ************************************ 00:05:09.255 START TEST env_dpdk_post_init 00:05:09.255 ************************************ 00:05:09.255 21:20:58 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.255 EAL: Detected CPU lcores: 128 00:05:09.255 EAL: Detected NUMA nodes: 2 00:05:09.255 EAL: Detected shared linkage of DPDK 00:05:09.255 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:09.255 EAL: Selected IOVA mode 'VA' 00:05:09.255 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.255 EAL: VFIO support initialized 00:05:09.255 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:09.255 EAL: Using IOMMU type 1 (Type 1) 00:05:09.255 EAL: Ignore mapping IO port bar(1) 00:05:09.515 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:09.515 EAL: Ignore mapping IO port bar(1) 00:05:09.776 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:09.776 EAL: Ignore mapping IO port bar(1) 00:05:10.037 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:10.037 EAL: Ignore mapping IO port bar(1) 00:05:10.037 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:10.298 EAL: Ignore mapping IO port bar(1) 00:05:10.298 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:10.558 EAL: Ignore mapping IO port bar(1) 00:05:10.558 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:10.817 EAL: Ignore mapping IO port bar(1) 00:05:10.818 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:11.077 EAL: Ignore mapping IO port bar(1) 00:05:11.077 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:11.338 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:11.338 EAL: Ignore mapping IO port bar(1) 00:05:11.599 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
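A minimal sketch of checking which kernel driver each of the BDFs probed above is currently bound to, using the sysfs driver symlink (the BDFs are copied from the trace):

for bdf in 0000:00:01.0 0000:65:00.0 0000:80:01.0; do
  drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
  echo "$bdf -> $drv"   # vfio-pci while env_dpdk_post_init owns them, ioatdma/nvme after setup.sh reset
done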
00:05:11.599 EAL: Ignore mapping IO port bar(1) 00:05:11.599 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:11.859 EAL: Ignore mapping IO port bar(1) 00:05:11.859 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:12.152 EAL: Ignore mapping IO port bar(1) 00:05:12.152 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:12.412 EAL: Ignore mapping IO port bar(1) 00:05:12.412 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:12.412 EAL: Ignore mapping IO port bar(1) 00:05:12.673 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:12.673 EAL: Ignore mapping IO port bar(1) 00:05:12.934 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:12.934 EAL: Ignore mapping IO port bar(1) 00:05:13.194 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:13.194 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:13.194 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:13.194 Starting DPDK initialization... 00:05:13.194 Starting SPDK post initialization... 00:05:13.194 SPDK NVMe probe 00:05:13.194 Attaching to 0000:65:00.0 00:05:13.194 Attached to 0000:65:00.0 00:05:13.194 Cleaning up... 00:05:15.107 00:05:15.107 real 0m5.715s 00:05:15.107 user 0m0.181s 00:05:15.107 sys 0m0.077s 00:05:15.107 21:21:04 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.107 21:21:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.107 ************************************ 00:05:15.107 END TEST env_dpdk_post_init 00:05:15.107 ************************************ 00:05:15.107 21:21:04 env -- common/autotest_common.sh@1142 -- # return 0 00:05:15.107 21:21:04 env -- env/env.sh@26 -- # uname 00:05:15.107 21:21:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:15.107 21:21:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.107 21:21:04 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.107 21:21:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.107 21:21:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.107 ************************************ 00:05:15.107 START TEST env_mem_callbacks 00:05:15.107 ************************************ 00:05:15.107 21:21:04 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.107 EAL: Detected CPU lcores: 128 00:05:15.107 EAL: Detected NUMA nodes: 2 00:05:15.107 EAL: Detected shared linkage of DPDK 00:05:15.107 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:15.107 EAL: Selected IOVA mode 'VA' 00:05:15.107 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.107 EAL: VFIO support initialized 00:05:15.107 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:15.107 00:05:15.107 00:05:15.107 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.107 http://cunit.sourceforge.net/ 00:05:15.107 00:05:15.107 00:05:15.107 Suite: memory 00:05:15.107 Test: test ... 
00:05:15.107 register 0x200000200000 2097152 00:05:15.107 malloc 3145728 00:05:15.107 register 0x200000400000 4194304 00:05:15.107 buf 0x200000500000 len 3145728 PASSED 00:05:15.107 malloc 64 00:05:15.107 buf 0x2000004fff40 len 64 PASSED 00:05:15.107 malloc 4194304 00:05:15.107 register 0x200000800000 6291456 00:05:15.107 buf 0x200000a00000 len 4194304 PASSED 00:05:15.107 free 0x200000500000 3145728 00:05:15.107 free 0x2000004fff40 64 00:05:15.107 unregister 0x200000400000 4194304 PASSED 00:05:15.107 free 0x200000a00000 4194304 00:05:15.107 unregister 0x200000800000 6291456 PASSED 00:05:15.107 malloc 8388608 00:05:15.107 register 0x200000400000 10485760 00:05:15.107 buf 0x200000600000 len 8388608 PASSED 00:05:15.107 free 0x200000600000 8388608 00:05:15.107 unregister 0x200000400000 10485760 PASSED 00:05:15.107 passed 00:05:15.107 00:05:15.107 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.107 suites 1 1 n/a 0 0 00:05:15.107 tests 1 1 1 0 0 00:05:15.107 asserts 15 15 15 0 n/a 00:05:15.107 00:05:15.107 Elapsed time = 0.004 seconds 00:05:15.107 00:05:15.107 real 0m0.068s 00:05:15.107 user 0m0.023s 00:05:15.107 sys 0m0.045s 00:05:15.107 21:21:04 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.107 21:21:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:15.107 ************************************ 00:05:15.107 END TEST env_mem_callbacks 00:05:15.107 ************************************ 00:05:15.107 21:21:04 env -- common/autotest_common.sh@1142 -- # return 0 00:05:15.107 00:05:15.107 real 0m7.329s 00:05:15.107 user 0m1.029s 00:05:15.107 sys 0m0.841s 00:05:15.107 21:21:04 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.107 21:21:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.107 ************************************ 00:05:15.107 END TEST env 00:05:15.107 ************************************ 00:05:15.107 21:21:04 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.107 21:21:04 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:15.107 21:21:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.107 21:21:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.107 21:21:04 -- common/autotest_common.sh@10 -- # set +x 00:05:15.107 ************************************ 00:05:15.107 START TEST rpc 00:05:15.107 ************************************ 00:05:15.107 21:21:04 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:15.107 * Looking for test storage... 00:05:15.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:15.368 21:21:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1957891 00:05:15.368 21:21:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.368 21:21:04 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:15.368 21:21:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1957891 00:05:15.368 21:21:04 rpc -- common/autotest_common.sh@829 -- # '[' -z 1957891 ']' 00:05:15.368 21:21:04 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.368 21:21:04 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.368 21:21:04 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
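A minimal sketch of the launch-and-wait step traced above: start spdk_tgt with the bdev tracepoint group and poll the RPC socket until it answers (rpc_get_methods is a standard SPDK RPC; the paths and the -e bdev flag are the ones from the trace):

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$rootdir/build/bin/spdk_tgt" -e bdev &
spdk_pid=$!
until "$rootdir/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$spdk_pid" 2>/dev/null || { echo "spdk_tgt exited before listening"; exit 1; }
  sleep 0.5
done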
00:05:15.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.368 21:21:04 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.368 21:21:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.369 [2024-07-15 21:21:04.972612] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:15.369 [2024-07-15 21:21:04.972668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1957891 ] 00:05:15.369 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.369 [2024-07-15 21:21:05.032485] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.369 [2024-07-15 21:21:05.097485] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:15.369 [2024-07-15 21:21:05.097523] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1957891' to capture a snapshot of events at runtime. 00:05:15.369 [2024-07-15 21:21:05.097530] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:15.369 [2024-07-15 21:21:05.097537] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:15.369 [2024-07-15 21:21:05.097542] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1957891 for offline analysis/debug. 00:05:15.369 [2024-07-15 21:21:05.097562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.940 21:21:05 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.940 21:21:05 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:15.940 21:21:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:15.940 21:21:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:15.940 21:21:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:15.940 21:21:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:15.940 21:21:05 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.940 21:21:05 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.940 21:21:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.201 ************************************ 00:05:16.201 START TEST rpc_integrity 00:05:16.201 ************************************ 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:16.201 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.201 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:16.201 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:16.201 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:16.201 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.201 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:16.201 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.201 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:16.201 { 00:05:16.201 "name": "Malloc0", 00:05:16.201 "aliases": [ 00:05:16.201 "7a835180-4bda-4fde-8a32-67b0654f6209" 00:05:16.201 ], 00:05:16.201 "product_name": "Malloc disk", 00:05:16.201 "block_size": 512, 00:05:16.201 "num_blocks": 16384, 00:05:16.201 "uuid": "7a835180-4bda-4fde-8a32-67b0654f6209", 00:05:16.201 "assigned_rate_limits": { 00:05:16.201 "rw_ios_per_sec": 0, 00:05:16.201 "rw_mbytes_per_sec": 0, 00:05:16.201 "r_mbytes_per_sec": 0, 00:05:16.201 "w_mbytes_per_sec": 0 00:05:16.201 }, 00:05:16.201 "claimed": false, 00:05:16.201 "zoned": false, 00:05:16.201 "supported_io_types": { 00:05:16.201 "read": true, 00:05:16.201 "write": true, 00:05:16.201 "unmap": true, 00:05:16.201 "flush": true, 00:05:16.201 "reset": true, 00:05:16.201 "nvme_admin": false, 00:05:16.201 "nvme_io": false, 00:05:16.201 "nvme_io_md": false, 00:05:16.201 "write_zeroes": true, 00:05:16.201 "zcopy": true, 00:05:16.201 "get_zone_info": false, 00:05:16.201 "zone_management": false, 00:05:16.201 "zone_append": false, 00:05:16.201 "compare": false, 00:05:16.201 "compare_and_write": false, 00:05:16.201 "abort": true, 00:05:16.201 "seek_hole": false, 00:05:16.201 "seek_data": false, 00:05:16.201 "copy": true, 00:05:16.201 "nvme_iov_md": false 00:05:16.201 }, 00:05:16.201 "memory_domains": [ 00:05:16.201 { 00:05:16.201 "dma_device_id": "system", 00:05:16.201 "dma_device_type": 1 00:05:16.201 }, 00:05:16.201 { 00:05:16.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.201 "dma_device_type": 2 00:05:16.201 } 00:05:16.201 ], 00:05:16.201 "driver_specific": {} 00:05:16.201 } 00:05:16.201 ]' 00:05:16.201 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:16.201 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:16.201 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.201 [2024-07-15 21:21:05.911310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:16.201 [2024-07-15 21:21:05.911342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:16.201 [2024-07-15 21:21:05.911355] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8efe50 00:05:16.201 [2024-07-15 21:21:05.911363] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:16.201 
[2024-07-15 21:21:05.912694] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:16.201 [2024-07-15 21:21:05.912715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:16.201 Passthru0 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.201 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.201 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.201 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:16.201 { 00:05:16.201 "name": "Malloc0", 00:05:16.201 "aliases": [ 00:05:16.202 "7a835180-4bda-4fde-8a32-67b0654f6209" 00:05:16.202 ], 00:05:16.202 "product_name": "Malloc disk", 00:05:16.202 "block_size": 512, 00:05:16.202 "num_blocks": 16384, 00:05:16.202 "uuid": "7a835180-4bda-4fde-8a32-67b0654f6209", 00:05:16.202 "assigned_rate_limits": { 00:05:16.202 "rw_ios_per_sec": 0, 00:05:16.202 "rw_mbytes_per_sec": 0, 00:05:16.202 "r_mbytes_per_sec": 0, 00:05:16.202 "w_mbytes_per_sec": 0 00:05:16.202 }, 00:05:16.202 "claimed": true, 00:05:16.202 "claim_type": "exclusive_write", 00:05:16.202 "zoned": false, 00:05:16.202 "supported_io_types": { 00:05:16.202 "read": true, 00:05:16.202 "write": true, 00:05:16.202 "unmap": true, 00:05:16.202 "flush": true, 00:05:16.202 "reset": true, 00:05:16.202 "nvme_admin": false, 00:05:16.202 "nvme_io": false, 00:05:16.202 "nvme_io_md": false, 00:05:16.202 "write_zeroes": true, 00:05:16.202 "zcopy": true, 00:05:16.202 "get_zone_info": false, 00:05:16.202 "zone_management": false, 00:05:16.202 "zone_append": false, 00:05:16.202 "compare": false, 00:05:16.202 "compare_and_write": false, 00:05:16.202 "abort": true, 00:05:16.202 "seek_hole": false, 00:05:16.202 "seek_data": false, 00:05:16.202 "copy": true, 00:05:16.202 "nvme_iov_md": false 00:05:16.202 }, 00:05:16.202 "memory_domains": [ 00:05:16.202 { 00:05:16.202 "dma_device_id": "system", 00:05:16.202 "dma_device_type": 1 00:05:16.202 }, 00:05:16.202 { 00:05:16.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.202 "dma_device_type": 2 00:05:16.202 } 00:05:16.202 ], 00:05:16.202 "driver_specific": {} 00:05:16.202 }, 00:05:16.202 { 00:05:16.202 "name": "Passthru0", 00:05:16.202 "aliases": [ 00:05:16.202 "2911a007-fc96-56c2-a7a5-26642384b007" 00:05:16.202 ], 00:05:16.202 "product_name": "passthru", 00:05:16.202 "block_size": 512, 00:05:16.202 "num_blocks": 16384, 00:05:16.202 "uuid": "2911a007-fc96-56c2-a7a5-26642384b007", 00:05:16.202 "assigned_rate_limits": { 00:05:16.202 "rw_ios_per_sec": 0, 00:05:16.202 "rw_mbytes_per_sec": 0, 00:05:16.202 "r_mbytes_per_sec": 0, 00:05:16.202 "w_mbytes_per_sec": 0 00:05:16.202 }, 00:05:16.202 "claimed": false, 00:05:16.202 "zoned": false, 00:05:16.202 "supported_io_types": { 00:05:16.202 "read": true, 00:05:16.202 "write": true, 00:05:16.202 "unmap": true, 00:05:16.202 "flush": true, 00:05:16.202 "reset": true, 00:05:16.202 "nvme_admin": false, 00:05:16.202 "nvme_io": false, 00:05:16.202 "nvme_io_md": false, 00:05:16.202 "write_zeroes": true, 00:05:16.202 "zcopy": true, 00:05:16.202 "get_zone_info": false, 00:05:16.202 "zone_management": false, 00:05:16.202 "zone_append": false, 00:05:16.202 "compare": false, 00:05:16.202 "compare_and_write": false, 00:05:16.202 "abort": true, 00:05:16.202 "seek_hole": false, 
00:05:16.202 "seek_data": false, 00:05:16.202 "copy": true, 00:05:16.202 "nvme_iov_md": false 00:05:16.202 }, 00:05:16.202 "memory_domains": [ 00:05:16.202 { 00:05:16.202 "dma_device_id": "system", 00:05:16.202 "dma_device_type": 1 00:05:16.202 }, 00:05:16.202 { 00:05:16.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.202 "dma_device_type": 2 00:05:16.202 } 00:05:16.202 ], 00:05:16.202 "driver_specific": { 00:05:16.202 "passthru": { 00:05:16.202 "name": "Passthru0", 00:05:16.202 "base_bdev_name": "Malloc0" 00:05:16.202 } 00:05:16.202 } 00:05:16.202 } 00:05:16.202 ]' 00:05:16.202 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:16.202 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:16.202 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:16.202 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.202 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.202 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.202 21:21:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:16.202 21:21:05 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.202 21:21:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.464 21:21:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.464 21:21:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:16.464 21:21:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.464 21:21:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.464 21:21:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.464 21:21:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:16.464 21:21:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:16.464 21:21:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:16.464 00:05:16.464 real 0m0.290s 00:05:16.464 user 0m0.190s 00:05:16.464 sys 0m0.036s 00:05:16.464 21:21:06 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.464 21:21:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.464 ************************************ 00:05:16.464 END TEST rpc_integrity 00:05:16.464 ************************************ 00:05:16.464 21:21:06 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.464 21:21:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:16.464 21:21:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.464 21:21:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.464 21:21:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.464 ************************************ 00:05:16.464 START TEST rpc_plugins 00:05:16.464 ************************************ 00:05:16.464 21:21:06 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:16.464 21:21:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:16.464 21:21:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.464 21:21:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.464 21:21:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.464 21:21:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:16.464 21:21:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:16.464 21:21:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.464 21:21:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.464 21:21:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.464 21:21:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:16.464 { 00:05:16.464 "name": "Malloc1", 00:05:16.464 "aliases": [ 00:05:16.464 "6bb1eaa4-5324-4baf-8508-2882bf5757bf" 00:05:16.464 ], 00:05:16.464 "product_name": "Malloc disk", 00:05:16.464 "block_size": 4096, 00:05:16.464 "num_blocks": 256, 00:05:16.464 "uuid": "6bb1eaa4-5324-4baf-8508-2882bf5757bf", 00:05:16.464 "assigned_rate_limits": { 00:05:16.464 "rw_ios_per_sec": 0, 00:05:16.464 "rw_mbytes_per_sec": 0, 00:05:16.464 "r_mbytes_per_sec": 0, 00:05:16.464 "w_mbytes_per_sec": 0 00:05:16.464 }, 00:05:16.464 "claimed": false, 00:05:16.464 "zoned": false, 00:05:16.464 "supported_io_types": { 00:05:16.464 "read": true, 00:05:16.464 "write": true, 00:05:16.464 "unmap": true, 00:05:16.464 "flush": true, 00:05:16.464 "reset": true, 00:05:16.464 "nvme_admin": false, 00:05:16.464 "nvme_io": false, 00:05:16.464 "nvme_io_md": false, 00:05:16.464 "write_zeroes": true, 00:05:16.464 "zcopy": true, 00:05:16.464 "get_zone_info": false, 00:05:16.464 "zone_management": false, 00:05:16.464 "zone_append": false, 00:05:16.464 "compare": false, 00:05:16.464 "compare_and_write": false, 00:05:16.464 "abort": true, 00:05:16.464 "seek_hole": false, 00:05:16.464 "seek_data": false, 00:05:16.464 "copy": true, 00:05:16.464 "nvme_iov_md": false 00:05:16.464 }, 00:05:16.464 "memory_domains": [ 00:05:16.464 { 00:05:16.464 "dma_device_id": "system", 00:05:16.464 "dma_device_type": 1 00:05:16.464 }, 00:05:16.464 { 00:05:16.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.464 "dma_device_type": 2 00:05:16.464 } 00:05:16.464 ], 00:05:16.464 "driver_specific": {} 00:05:16.464 } 00:05:16.464 ]' 00:05:16.464 21:21:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:16.464 21:21:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:16.464 21:21:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:16.464 21:21:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.464 21:21:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.464 21:21:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.464 21:21:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:16.465 21:21:06 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.465 21:21:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.465 21:21:06 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.465 21:21:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:16.465 21:21:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:16.725 21:21:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:16.725 00:05:16.725 real 0m0.140s 00:05:16.725 user 0m0.086s 00:05:16.725 sys 0m0.018s 00:05:16.725 21:21:06 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.725 21:21:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.725 ************************************ 00:05:16.725 END TEST rpc_plugins 00:05:16.725 ************************************ 00:05:16.725 21:21:06 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.725 21:21:06 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:16.725 21:21:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.725 21:21:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.725 21:21:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.725 ************************************ 00:05:16.725 START TEST rpc_trace_cmd_test 00:05:16.725 ************************************ 00:05:16.725 21:21:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:16.725 21:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:16.725 21:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:16.725 21:21:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.725 21:21:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:16.725 21:21:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.725 21:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:16.725 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1957891", 00:05:16.725 "tpoint_group_mask": "0x8", 00:05:16.725 "iscsi_conn": { 00:05:16.725 "mask": "0x2", 00:05:16.725 "tpoint_mask": "0x0" 00:05:16.725 }, 00:05:16.725 "scsi": { 00:05:16.725 "mask": "0x4", 00:05:16.725 "tpoint_mask": "0x0" 00:05:16.725 }, 00:05:16.725 "bdev": { 00:05:16.725 "mask": "0x8", 00:05:16.725 "tpoint_mask": "0xffffffffffffffff" 00:05:16.725 }, 00:05:16.725 "nvmf_rdma": { 00:05:16.725 "mask": "0x10", 00:05:16.725 "tpoint_mask": "0x0" 00:05:16.725 }, 00:05:16.725 "nvmf_tcp": { 00:05:16.725 "mask": "0x20", 00:05:16.725 "tpoint_mask": "0x0" 00:05:16.725 }, 00:05:16.725 "ftl": { 00:05:16.725 "mask": "0x40", 00:05:16.725 "tpoint_mask": "0x0" 00:05:16.725 }, 00:05:16.725 "blobfs": { 00:05:16.725 "mask": "0x80", 00:05:16.725 "tpoint_mask": "0x0" 00:05:16.725 }, 00:05:16.725 "dsa": { 00:05:16.725 "mask": "0x200", 00:05:16.725 "tpoint_mask": "0x0" 00:05:16.725 }, 00:05:16.725 "thread": { 00:05:16.725 "mask": "0x400", 00:05:16.725 "tpoint_mask": "0x0" 00:05:16.725 }, 00:05:16.725 "nvme_pcie": { 00:05:16.725 "mask": "0x800", 00:05:16.725 "tpoint_mask": "0x0" 00:05:16.725 }, 00:05:16.725 "iaa": { 00:05:16.725 "mask": "0x1000", 00:05:16.725 "tpoint_mask": "0x0" 00:05:16.725 }, 00:05:16.725 "nvme_tcp": { 00:05:16.725 "mask": "0x2000", 00:05:16.725 "tpoint_mask": "0x0" 00:05:16.725 }, 00:05:16.725 "bdev_nvme": { 00:05:16.725 "mask": "0x4000", 00:05:16.725 "tpoint_mask": "0x0" 00:05:16.725 }, 00:05:16.725 "sock": { 00:05:16.725 "mask": "0x8000", 00:05:16.725 "tpoint_mask": "0x0" 00:05:16.725 } 00:05:16.725 }' 00:05:16.725 21:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:16.725 21:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:16.725 21:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:16.725 21:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:16.725 21:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:16.725 21:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:16.725 21:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:16.984 21:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:16.984 21:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:16.984 21:21:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
00:05:16.984 00:05:16.984 real 0m0.245s 00:05:16.984 user 0m0.206s 00:05:16.984 sys 0m0.029s 00:05:16.984 21:21:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.984 21:21:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:16.984 ************************************ 00:05:16.984 END TEST rpc_trace_cmd_test 00:05:16.984 ************************************ 00:05:16.984 21:21:06 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.984 21:21:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:16.984 21:21:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:16.984 21:21:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:16.984 21:21:06 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.984 21:21:06 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.984 21:21:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.984 ************************************ 00:05:16.984 START TEST rpc_daemon_integrity 00:05:16.984 ************************************ 00:05:16.984 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:16.984 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:16.984 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:16.985 { 00:05:16.985 "name": "Malloc2", 00:05:16.985 "aliases": [ 00:05:16.985 "04ec3ee4-678b-4cd0-86a7-fe30341ec476" 00:05:16.985 ], 00:05:16.985 "product_name": "Malloc disk", 00:05:16.985 "block_size": 512, 00:05:16.985 "num_blocks": 16384, 00:05:16.985 "uuid": "04ec3ee4-678b-4cd0-86a7-fe30341ec476", 00:05:16.985 "assigned_rate_limits": { 00:05:16.985 "rw_ios_per_sec": 0, 00:05:16.985 "rw_mbytes_per_sec": 0, 00:05:16.985 "r_mbytes_per_sec": 0, 00:05:16.985 "w_mbytes_per_sec": 0 00:05:16.985 }, 00:05:16.985 "claimed": false, 00:05:16.985 "zoned": false, 00:05:16.985 "supported_io_types": { 00:05:16.985 "read": true, 00:05:16.985 "write": true, 00:05:16.985 "unmap": true, 00:05:16.985 "flush": true, 00:05:16.985 "reset": true, 00:05:16.985 "nvme_admin": false, 00:05:16.985 "nvme_io": false, 
00:05:16.985 "nvme_io_md": false, 00:05:16.985 "write_zeroes": true, 00:05:16.985 "zcopy": true, 00:05:16.985 "get_zone_info": false, 00:05:16.985 "zone_management": false, 00:05:16.985 "zone_append": false, 00:05:16.985 "compare": false, 00:05:16.985 "compare_and_write": false, 00:05:16.985 "abort": true, 00:05:16.985 "seek_hole": false, 00:05:16.985 "seek_data": false, 00:05:16.985 "copy": true, 00:05:16.985 "nvme_iov_md": false 00:05:16.985 }, 00:05:16.985 "memory_domains": [ 00:05:16.985 { 00:05:16.985 "dma_device_id": "system", 00:05:16.985 "dma_device_type": 1 00:05:16.985 }, 00:05:16.985 { 00:05:16.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.985 "dma_device_type": 2 00:05:16.985 } 00:05:16.985 ], 00:05:16.985 "driver_specific": {} 00:05:16.985 } 00:05:16.985 ]' 00:05:16.985 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.244 [2024-07-15 21:21:06.817785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:17.244 [2024-07-15 21:21:06.817817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.244 [2024-07-15 21:21:06.817829] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8f07c0 00:05:17.244 [2024-07-15 21:21:06.817836] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.244 [2024-07-15 21:21:06.819048] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.244 [2024-07-15 21:21:06.819072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.244 Passthru0 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.244 { 00:05:17.244 "name": "Malloc2", 00:05:17.244 "aliases": [ 00:05:17.244 "04ec3ee4-678b-4cd0-86a7-fe30341ec476" 00:05:17.244 ], 00:05:17.244 "product_name": "Malloc disk", 00:05:17.244 "block_size": 512, 00:05:17.244 "num_blocks": 16384, 00:05:17.244 "uuid": "04ec3ee4-678b-4cd0-86a7-fe30341ec476", 00:05:17.244 "assigned_rate_limits": { 00:05:17.244 "rw_ios_per_sec": 0, 00:05:17.244 "rw_mbytes_per_sec": 0, 00:05:17.244 "r_mbytes_per_sec": 0, 00:05:17.244 "w_mbytes_per_sec": 0 00:05:17.244 }, 00:05:17.244 "claimed": true, 00:05:17.244 "claim_type": "exclusive_write", 00:05:17.244 "zoned": false, 00:05:17.244 "supported_io_types": { 00:05:17.244 "read": true, 00:05:17.244 "write": true, 00:05:17.244 "unmap": true, 00:05:17.244 "flush": true, 00:05:17.244 "reset": true, 00:05:17.244 "nvme_admin": false, 00:05:17.244 "nvme_io": false, 00:05:17.244 "nvme_io_md": false, 00:05:17.244 "write_zeroes": true, 00:05:17.244 "zcopy": true, 00:05:17.244 "get_zone_info": 
false, 00:05:17.244 "zone_management": false, 00:05:17.244 "zone_append": false, 00:05:17.244 "compare": false, 00:05:17.244 "compare_and_write": false, 00:05:17.244 "abort": true, 00:05:17.244 "seek_hole": false, 00:05:17.244 "seek_data": false, 00:05:17.244 "copy": true, 00:05:17.244 "nvme_iov_md": false 00:05:17.244 }, 00:05:17.244 "memory_domains": [ 00:05:17.244 { 00:05:17.244 "dma_device_id": "system", 00:05:17.244 "dma_device_type": 1 00:05:17.244 }, 00:05:17.244 { 00:05:17.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.244 "dma_device_type": 2 00:05:17.244 } 00:05:17.244 ], 00:05:17.244 "driver_specific": {} 00:05:17.244 }, 00:05:17.244 { 00:05:17.244 "name": "Passthru0", 00:05:17.244 "aliases": [ 00:05:17.244 "f7c5a792-d12c-554b-a164-6e6a9eae918a" 00:05:17.244 ], 00:05:17.244 "product_name": "passthru", 00:05:17.244 "block_size": 512, 00:05:17.244 "num_blocks": 16384, 00:05:17.244 "uuid": "f7c5a792-d12c-554b-a164-6e6a9eae918a", 00:05:17.244 "assigned_rate_limits": { 00:05:17.244 "rw_ios_per_sec": 0, 00:05:17.244 "rw_mbytes_per_sec": 0, 00:05:17.244 "r_mbytes_per_sec": 0, 00:05:17.244 "w_mbytes_per_sec": 0 00:05:17.244 }, 00:05:17.244 "claimed": false, 00:05:17.244 "zoned": false, 00:05:17.244 "supported_io_types": { 00:05:17.244 "read": true, 00:05:17.244 "write": true, 00:05:17.244 "unmap": true, 00:05:17.244 "flush": true, 00:05:17.244 "reset": true, 00:05:17.244 "nvme_admin": false, 00:05:17.244 "nvme_io": false, 00:05:17.244 "nvme_io_md": false, 00:05:17.244 "write_zeroes": true, 00:05:17.244 "zcopy": true, 00:05:17.244 "get_zone_info": false, 00:05:17.244 "zone_management": false, 00:05:17.244 "zone_append": false, 00:05:17.244 "compare": false, 00:05:17.244 "compare_and_write": false, 00:05:17.244 "abort": true, 00:05:17.244 "seek_hole": false, 00:05:17.244 "seek_data": false, 00:05:17.244 "copy": true, 00:05:17.244 "nvme_iov_md": false 00:05:17.244 }, 00:05:17.244 "memory_domains": [ 00:05:17.244 { 00:05:17.244 "dma_device_id": "system", 00:05:17.244 "dma_device_type": 1 00:05:17.244 }, 00:05:17.244 { 00:05:17.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.244 "dma_device_type": 2 00:05:17.244 } 00:05:17.244 ], 00:05:17.244 "driver_specific": { 00:05:17.244 "passthru": { 00:05:17.244 "name": "Passthru0", 00:05:17.244 "base_bdev_name": "Malloc2" 00:05:17.244 } 00:05:17.244 } 00:05:17.244 } 00:05:17.244 ]' 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.244 21:21:06 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.244 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.245 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.245 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.245 21:21:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.245 00:05:17.245 real 0m0.297s 00:05:17.245 user 0m0.195s 00:05:17.245 sys 0m0.039s 00:05:17.245 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.245 21:21:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.245 ************************************ 00:05:17.245 END TEST rpc_daemon_integrity 00:05:17.245 ************************************ 00:05:17.245 21:21:07 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.245 21:21:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:17.245 21:21:07 rpc -- rpc/rpc.sh@84 -- # killprocess 1957891 00:05:17.245 21:21:07 rpc -- common/autotest_common.sh@948 -- # '[' -z 1957891 ']' 00:05:17.245 21:21:07 rpc -- common/autotest_common.sh@952 -- # kill -0 1957891 00:05:17.245 21:21:07 rpc -- common/autotest_common.sh@953 -- # uname 00:05:17.245 21:21:07 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.245 21:21:07 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1957891 00:05:17.504 21:21:07 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.504 21:21:07 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.504 21:21:07 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1957891' 00:05:17.504 killing process with pid 1957891 00:05:17.504 21:21:07 rpc -- common/autotest_common.sh@967 -- # kill 1957891 00:05:17.504 21:21:07 rpc -- common/autotest_common.sh@972 -- # wait 1957891 00:05:17.504 00:05:17.504 real 0m2.462s 00:05:17.504 user 0m3.256s 00:05:17.504 sys 0m0.672s 00:05:17.504 21:21:07 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.504 21:21:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.504 ************************************ 00:05:17.504 END TEST rpc 00:05:17.504 ************************************ 00:05:17.763 21:21:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:17.763 21:21:07 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:17.763 21:21:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.763 21:21:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.763 21:21:07 -- common/autotest_common.sh@10 -- # set +x 00:05:17.763 ************************************ 00:05:17.763 START TEST skip_rpc 00:05:17.763 ************************************ 00:05:17.763 21:21:07 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:17.763 * Looking for test storage... 
00:05:17.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:17.763 21:21:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:17.763 21:21:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:17.763 21:21:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:17.763 21:21:07 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.763 21:21:07 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.763 21:21:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.763 ************************************ 00:05:17.763 START TEST skip_rpc 00:05:17.763 ************************************ 00:05:17.763 21:21:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:17.763 21:21:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1958420 00:05:17.763 21:21:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.763 21:21:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:17.763 21:21:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:17.763 [2024-07-15 21:21:07.547221] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:17.763 [2024-07-15 21:21:07.547277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1958420 ] 00:05:18.023 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.023 [2024-07-15 21:21:07.607764] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.023 [2024-07-15 21:21:07.675823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.302 21:21:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1958420 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1958420 ']' 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1958420 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1958420 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1958420' 00:05:23.303 killing process with pid 1958420 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1958420 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1958420 00:05:23.303 00:05:23.303 real 0m5.279s 00:05:23.303 user 0m5.089s 00:05:23.303 sys 0m0.223s 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.303 21:21:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.303 ************************************ 00:05:23.303 END TEST skip_rpc 00:05:23.303 ************************************ 00:05:23.303 21:21:12 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:23.303 21:21:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:23.303 21:21:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.303 21:21:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.303 21:21:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.303 ************************************ 00:05:23.303 START TEST skip_rpc_with_json 00:05:23.303 ************************************ 00:05:23.303 21:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:23.303 21:21:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:23.303 21:21:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1960042 00:05:23.303 21:21:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.303 21:21:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1960042 00:05:23.303 21:21:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.303 21:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1960042 ']' 00:05:23.303 21:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.303 21:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.303 21:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:23.303 21:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.303 21:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.303 [2024-07-15 21:21:12.895528] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:23.303 [2024-07-15 21:21:12.895581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1960042 ] 00:05:23.303 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.303 [2024-07-15 21:21:12.958893] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.303 [2024-07-15 21:21:13.033723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.893 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.893 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:23.893 21:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:23.893 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.893 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.893 [2024-07-15 21:21:13.661055] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:23.893 request: 00:05:23.893 { 00:05:23.893 "trtype": "tcp", 00:05:23.893 "method": "nvmf_get_transports", 00:05:23.893 "req_id": 1 00:05:23.893 } 00:05:23.893 Got JSON-RPC error response 00:05:23.893 response: 00:05:23.893 { 00:05:23.893 "code": -19, 00:05:23.893 "message": "No such device" 00:05:23.893 } 00:05:23.893 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:23.893 21:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:23.893 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.893 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.893 [2024-07-15 21:21:13.673179] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:23.893 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.893 21:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:23.893 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.893 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.153 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.153 21:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:24.153 { 00:05:24.153 "subsystems": [ 00:05:24.153 { 00:05:24.153 "subsystem": "vfio_user_target", 00:05:24.153 "config": null 00:05:24.153 }, 00:05:24.153 { 00:05:24.153 "subsystem": "keyring", 00:05:24.153 "config": [] 00:05:24.153 }, 00:05:24.153 { 00:05:24.153 "subsystem": "iobuf", 00:05:24.153 "config": [ 00:05:24.153 { 00:05:24.153 "method": "iobuf_set_options", 00:05:24.153 "params": { 00:05:24.153 "small_pool_count": 8192, 00:05:24.153 "large_pool_count": 1024, 00:05:24.153 "small_bufsize": 8192, 00:05:24.153 "large_bufsize": 
135168 00:05:24.153 } 00:05:24.153 } 00:05:24.153 ] 00:05:24.153 }, 00:05:24.153 { 00:05:24.153 "subsystem": "sock", 00:05:24.153 "config": [ 00:05:24.153 { 00:05:24.153 "method": "sock_set_default_impl", 00:05:24.153 "params": { 00:05:24.153 "impl_name": "posix" 00:05:24.153 } 00:05:24.153 }, 00:05:24.153 { 00:05:24.153 "method": "sock_impl_set_options", 00:05:24.153 "params": { 00:05:24.153 "impl_name": "ssl", 00:05:24.153 "recv_buf_size": 4096, 00:05:24.153 "send_buf_size": 4096, 00:05:24.153 "enable_recv_pipe": true, 00:05:24.153 "enable_quickack": false, 00:05:24.153 "enable_placement_id": 0, 00:05:24.153 "enable_zerocopy_send_server": true, 00:05:24.153 "enable_zerocopy_send_client": false, 00:05:24.153 "zerocopy_threshold": 0, 00:05:24.153 "tls_version": 0, 00:05:24.153 "enable_ktls": false 00:05:24.153 } 00:05:24.153 }, 00:05:24.153 { 00:05:24.153 "method": "sock_impl_set_options", 00:05:24.153 "params": { 00:05:24.153 "impl_name": "posix", 00:05:24.153 "recv_buf_size": 2097152, 00:05:24.153 "send_buf_size": 2097152, 00:05:24.153 "enable_recv_pipe": true, 00:05:24.153 "enable_quickack": false, 00:05:24.153 "enable_placement_id": 0, 00:05:24.153 "enable_zerocopy_send_server": true, 00:05:24.153 "enable_zerocopy_send_client": false, 00:05:24.153 "zerocopy_threshold": 0, 00:05:24.153 "tls_version": 0, 00:05:24.153 "enable_ktls": false 00:05:24.153 } 00:05:24.153 } 00:05:24.153 ] 00:05:24.153 }, 00:05:24.153 { 00:05:24.153 "subsystem": "vmd", 00:05:24.153 "config": [] 00:05:24.153 }, 00:05:24.153 { 00:05:24.153 "subsystem": "accel", 00:05:24.153 "config": [ 00:05:24.153 { 00:05:24.153 "method": "accel_set_options", 00:05:24.153 "params": { 00:05:24.153 "small_cache_size": 128, 00:05:24.153 "large_cache_size": 16, 00:05:24.153 "task_count": 2048, 00:05:24.153 "sequence_count": 2048, 00:05:24.153 "buf_count": 2048 00:05:24.153 } 00:05:24.153 } 00:05:24.153 ] 00:05:24.153 }, 00:05:24.153 { 00:05:24.154 "subsystem": "bdev", 00:05:24.154 "config": [ 00:05:24.154 { 00:05:24.154 "method": "bdev_set_options", 00:05:24.154 "params": { 00:05:24.154 "bdev_io_pool_size": 65535, 00:05:24.154 "bdev_io_cache_size": 256, 00:05:24.154 "bdev_auto_examine": true, 00:05:24.154 "iobuf_small_cache_size": 128, 00:05:24.154 "iobuf_large_cache_size": 16 00:05:24.154 } 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "method": "bdev_raid_set_options", 00:05:24.154 "params": { 00:05:24.154 "process_window_size_kb": 1024 00:05:24.154 } 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "method": "bdev_iscsi_set_options", 00:05:24.154 "params": { 00:05:24.154 "timeout_sec": 30 00:05:24.154 } 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "method": "bdev_nvme_set_options", 00:05:24.154 "params": { 00:05:24.154 "action_on_timeout": "none", 00:05:24.154 "timeout_us": 0, 00:05:24.154 "timeout_admin_us": 0, 00:05:24.154 "keep_alive_timeout_ms": 10000, 00:05:24.154 "arbitration_burst": 0, 00:05:24.154 "low_priority_weight": 0, 00:05:24.154 "medium_priority_weight": 0, 00:05:24.154 "high_priority_weight": 0, 00:05:24.154 "nvme_adminq_poll_period_us": 10000, 00:05:24.154 "nvme_ioq_poll_period_us": 0, 00:05:24.154 "io_queue_requests": 0, 00:05:24.154 "delay_cmd_submit": true, 00:05:24.154 "transport_retry_count": 4, 00:05:24.154 "bdev_retry_count": 3, 00:05:24.154 "transport_ack_timeout": 0, 00:05:24.154 "ctrlr_loss_timeout_sec": 0, 00:05:24.154 "reconnect_delay_sec": 0, 00:05:24.154 "fast_io_fail_timeout_sec": 0, 00:05:24.154 "disable_auto_failback": false, 00:05:24.154 "generate_uuids": false, 00:05:24.154 "transport_tos": 0, 
00:05:24.154 "nvme_error_stat": false, 00:05:24.154 "rdma_srq_size": 0, 00:05:24.154 "io_path_stat": false, 00:05:24.154 "allow_accel_sequence": false, 00:05:24.154 "rdma_max_cq_size": 0, 00:05:24.154 "rdma_cm_event_timeout_ms": 0, 00:05:24.154 "dhchap_digests": [ 00:05:24.154 "sha256", 00:05:24.154 "sha384", 00:05:24.154 "sha512" 00:05:24.154 ], 00:05:24.154 "dhchap_dhgroups": [ 00:05:24.154 "null", 00:05:24.154 "ffdhe2048", 00:05:24.154 "ffdhe3072", 00:05:24.154 "ffdhe4096", 00:05:24.154 "ffdhe6144", 00:05:24.154 "ffdhe8192" 00:05:24.154 ] 00:05:24.154 } 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "method": "bdev_nvme_set_hotplug", 00:05:24.154 "params": { 00:05:24.154 "period_us": 100000, 00:05:24.154 "enable": false 00:05:24.154 } 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "method": "bdev_wait_for_examine" 00:05:24.154 } 00:05:24.154 ] 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "subsystem": "scsi", 00:05:24.154 "config": null 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "subsystem": "scheduler", 00:05:24.154 "config": [ 00:05:24.154 { 00:05:24.154 "method": "framework_set_scheduler", 00:05:24.154 "params": { 00:05:24.154 "name": "static" 00:05:24.154 } 00:05:24.154 } 00:05:24.154 ] 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "subsystem": "vhost_scsi", 00:05:24.154 "config": [] 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "subsystem": "vhost_blk", 00:05:24.154 "config": [] 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "subsystem": "ublk", 00:05:24.154 "config": [] 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "subsystem": "nbd", 00:05:24.154 "config": [] 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "subsystem": "nvmf", 00:05:24.154 "config": [ 00:05:24.154 { 00:05:24.154 "method": "nvmf_set_config", 00:05:24.154 "params": { 00:05:24.154 "discovery_filter": "match_any", 00:05:24.154 "admin_cmd_passthru": { 00:05:24.154 "identify_ctrlr": false 00:05:24.154 } 00:05:24.154 } 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "method": "nvmf_set_max_subsystems", 00:05:24.154 "params": { 00:05:24.154 "max_subsystems": 1024 00:05:24.154 } 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "method": "nvmf_set_crdt", 00:05:24.154 "params": { 00:05:24.154 "crdt1": 0, 00:05:24.154 "crdt2": 0, 00:05:24.154 "crdt3": 0 00:05:24.154 } 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "method": "nvmf_create_transport", 00:05:24.154 "params": { 00:05:24.154 "trtype": "TCP", 00:05:24.154 "max_queue_depth": 128, 00:05:24.154 "max_io_qpairs_per_ctrlr": 127, 00:05:24.154 "in_capsule_data_size": 4096, 00:05:24.154 "max_io_size": 131072, 00:05:24.154 "io_unit_size": 131072, 00:05:24.154 "max_aq_depth": 128, 00:05:24.154 "num_shared_buffers": 511, 00:05:24.154 "buf_cache_size": 4294967295, 00:05:24.154 "dif_insert_or_strip": false, 00:05:24.154 "zcopy": false, 00:05:24.154 "c2h_success": true, 00:05:24.154 "sock_priority": 0, 00:05:24.154 "abort_timeout_sec": 1, 00:05:24.154 "ack_timeout": 0, 00:05:24.154 "data_wr_pool_size": 0 00:05:24.154 } 00:05:24.154 } 00:05:24.154 ] 00:05:24.154 }, 00:05:24.154 { 00:05:24.154 "subsystem": "iscsi", 00:05:24.154 "config": [ 00:05:24.154 { 00:05:24.154 "method": "iscsi_set_options", 00:05:24.154 "params": { 00:05:24.154 "node_base": "iqn.2016-06.io.spdk", 00:05:24.154 "max_sessions": 128, 00:05:24.154 "max_connections_per_session": 2, 00:05:24.154 "max_queue_depth": 64, 00:05:24.154 "default_time2wait": 2, 00:05:24.154 "default_time2retain": 20, 00:05:24.154 "first_burst_length": 8192, 00:05:24.154 "immediate_data": true, 00:05:24.154 "allow_duplicated_isid": false, 00:05:24.154 
"error_recovery_level": 0, 00:05:24.154 "nop_timeout": 60, 00:05:24.154 "nop_in_interval": 30, 00:05:24.154 "disable_chap": false, 00:05:24.154 "require_chap": false, 00:05:24.154 "mutual_chap": false, 00:05:24.154 "chap_group": 0, 00:05:24.154 "max_large_datain_per_connection": 64, 00:05:24.154 "max_r2t_per_connection": 4, 00:05:24.154 "pdu_pool_size": 36864, 00:05:24.154 "immediate_data_pool_size": 16384, 00:05:24.154 "data_out_pool_size": 2048 00:05:24.154 } 00:05:24.154 } 00:05:24.154 ] 00:05:24.154 } 00:05:24.154 ] 00:05:24.154 } 00:05:24.154 21:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:24.154 21:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1960042 00:05:24.154 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1960042 ']' 00:05:24.154 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1960042 00:05:24.154 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:24.154 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.154 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1960042 00:05:24.154 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.154 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.154 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1960042' 00:05:24.154 killing process with pid 1960042 00:05:24.154 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1960042 00:05:24.154 21:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1960042 00:05:24.415 21:21:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1960263 00:05:24.415 21:21:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:24.415 21:21:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1960263 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1960263 ']' 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1960263 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1960263 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1960263' 00:05:29.701 killing process with pid 1960263 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1960263 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1960263 
00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:29.701 00:05:29.701 real 0m6.539s 00:05:29.701 user 0m6.422s 00:05:29.701 sys 0m0.524s 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.701 ************************************ 00:05:29.701 END TEST skip_rpc_with_json 00:05:29.701 ************************************ 00:05:29.701 21:21:19 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:29.701 21:21:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:29.701 21:21:19 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.701 21:21:19 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.701 21:21:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.701 ************************************ 00:05:29.701 START TEST skip_rpc_with_delay 00:05:29.701 ************************************ 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:29.701 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:29.963 [2024-07-15 21:21:19.513184] app.c: 837:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:29.963 [2024-07-15 21:21:19.513271] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:29.963 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:29.963 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.963 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:29.963 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.963 00:05:29.963 real 0m0.075s 00:05:29.963 user 0m0.053s 00:05:29.963 sys 0m0.021s 00:05:29.963 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.963 21:21:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:29.963 ************************************ 00:05:29.963 END TEST skip_rpc_with_delay 00:05:29.963 ************************************ 00:05:29.963 21:21:19 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:29.963 21:21:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:29.963 21:21:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:29.963 21:21:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:29.963 21:21:19 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.963 21:21:19 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.963 21:21:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.963 ************************************ 00:05:29.963 START TEST exit_on_failed_rpc_init 00:05:29.963 ************************************ 00:05:29.963 21:21:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:29.963 21:21:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1961455 00:05:29.963 21:21:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1961455 00:05:29.963 21:21:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1961455 ']' 00:05:29.963 21:21:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.963 21:21:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.963 21:21:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.963 21:21:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.963 21:21:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.963 21:21:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.963 [2024-07-15 21:21:19.669817] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
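Note: "waitforlisten" above blocks until the new target's RPC socket is usable. A minimal approximation, assuming bash and the default /var/tmp/spdk.sock shown in this log (the real helper also keeps checking that the pid is still alive and uses the max_retries=100 budget seen above):

# Sketch: poll for the RPC Unix domain socket before sending any RPCs.
sock=/var/tmp/spdk.sock
for i in $(seq 1 100); do
    [ -S "$sock" ] && break
    sleep 0.1
done
[ -S "$sock" ] || { echo "spdk_tgt never opened $sock" >&2; exit 1; }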
00:05:29.963 [2024-07-15 21:21:19.669875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1961455 ] 00:05:29.963 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.963 [2024-07-15 21:21:19.733982] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.224 [2024-07-15 21:21:19.812352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.795 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.795 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:30.795 21:21:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.795 21:21:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.795 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:30.795 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.795 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.795 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.795 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.795 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.795 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.795 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.795 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.795 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:30.795 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.795 [2024-07-15 21:21:20.470908] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
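Note: the second spdk_tgt launched here (-m 0x2, under the NOT wrapper) is expected to die during RPC init because the first instance already owns /var/tmp/spdk.sock; the "socket in use" and "Unable to start RPC service" errors appear a few entries below. A rough sketch of the check:

# Sketch: a second target on the same default RPC socket must refuse to start.
SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
$SPDK_TGT -m 0x1 &            # first instance binds /var/tmp/spdk.sock
first_pid=$!
# ... wait for the socket as in the previous sketch ...
if $SPDK_TGT -m 0x2; then     # same socket -> RPC listen fails, app stops
    echo "second instance unexpectedly started" >&2
    exit 1
fi
kill "$first_pid"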
00:05:30.795 [2024-07-15 21:21:20.470957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1961659 ] 00:05:30.795 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.795 [2024-07-15 21:21:20.544960] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.056 [2024-07-15 21:21:20.608788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.056 [2024-07-15 21:21:20.608842] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:31.056 [2024-07-15 21:21:20.608852] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:31.056 [2024-07-15 21:21:20.608858] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1961455 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1961455 ']' 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1961455 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1961455 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1961455' 00:05:31.056 killing process with pid 1961455 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1961455 00:05:31.056 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1961455 00:05:31.317 00:05:31.317 real 0m1.320s 00:05:31.317 user 0m1.530s 00:05:31.317 sys 0m0.369s 00:05:31.317 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.317 21:21:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.318 ************************************ 00:05:31.318 END TEST exit_on_failed_rpc_init 00:05:31.318 ************************************ 00:05:31.318 21:21:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:31.318 21:21:20 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:31.318 00:05:31.318 real 0m13.625s 00:05:31.318 user 0m13.247s 00:05:31.318 sys 0m1.420s 00:05:31.318 21:21:20 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.318 21:21:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.318 ************************************ 00:05:31.318 END TEST skip_rpc 00:05:31.318 ************************************ 00:05:31.318 21:21:21 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.318 21:21:21 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:31.318 21:21:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.318 21:21:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.318 21:21:21 -- common/autotest_common.sh@10 -- # set +x 00:05:31.318 ************************************ 00:05:31.318 START TEST rpc_client 00:05:31.318 ************************************ 00:05:31.318 21:21:21 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:31.579 * Looking for test storage... 00:05:31.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:31.579 21:21:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:31.579 OK 00:05:31.579 21:21:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:31.579 00:05:31.579 real 0m0.128s 00:05:31.579 user 0m0.058s 00:05:31.579 sys 0m0.078s 00:05:31.579 21:21:21 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.580 21:21:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:31.580 ************************************ 00:05:31.580 END TEST rpc_client 00:05:31.580 ************************************ 00:05:31.580 21:21:21 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.580 21:21:21 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:31.580 21:21:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.580 21:21:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.580 21:21:21 -- common/autotest_common.sh@10 -- # set +x 00:05:31.580 ************************************ 00:05:31.580 START TEST json_config 00:05:31.580 ************************************ 00:05:31.580 21:21:21 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.580 
21:21:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.580 21:21:21 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.580 21:21:21 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.580 21:21:21 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.580 21:21:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.580 21:21:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.580 21:21:21 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.580 21:21:21 json_config -- paths/export.sh@5 -- # export PATH 00:05:31.580 21:21:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@47 -- # : 0 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.580 21:21:21 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:31.580 21:21:21 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:31.580 INFO: JSON configuration test init 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:31.580 21:21:21 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.580 21:21:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:31.580 21:21:21 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.580 21:21:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.580 21:21:21 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:31.580 21:21:21 json_config -- json_config/common.sh@9 -- # local app=target 00:05:31.580 21:21:21 json_config -- json_config/common.sh@10 -- # shift 00:05:31.580 21:21:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.580 21:21:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.580 21:21:21 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.580 21:21:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.580 21:21:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.580 21:21:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1961939 00:05:31.580 21:21:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:31.580 Waiting for target to run... 00:05:31.580 21:21:21 json_config -- json_config/common.sh@25 -- # waitforlisten 1961939 /var/tmp/spdk_tgt.sock 00:05:31.580 21:21:21 json_config -- common/autotest_common.sh@829 -- # '[' -z 1961939 ']' 00:05:31.580 21:21:21 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.580 21:21:21 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.580 21:21:21 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:31.580 21:21:21 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.580 21:21:21 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.580 21:21:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.842 [2024-07-15 21:21:21.430207] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:31.842 [2024-07-15 21:21:21.430282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1961939 ] 00:05:31.842 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.102 [2024-07-15 21:21:21.737668] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.103 [2024-07-15 21:21:21.787743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.712 21:21:22 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.712 21:21:22 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:32.712 21:21:22 json_config -- json_config/common.sh@26 -- # echo '' 00:05:32.712 00:05:32.712 21:21:22 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:32.712 21:21:22 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:32.712 21:21:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.712 21:21:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.712 21:21:22 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:32.712 21:21:22 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:32.712 21:21:22 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.712 21:21:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.712 21:21:22 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:32.712 21:21:22 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:32.712 21:21:22 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:32.972 21:21:22 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:32.972 21:21:22 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:32.972 21:21:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.972 21:21:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.972 21:21:22 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:32.972 21:21:22 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:32.972 21:21:22 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:33.233 21:21:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:33.233 21:21:22 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.233 21:21:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:33.233 21:21:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.233 21:21:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:33.233 21:21:22 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.233 21:21:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.495 MallocForNvmf0 00:05:33.495 21:21:23 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.495 21:21:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.495 MallocForNvmf1 00:05:33.756 21:21:23 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.756 21:21:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.756 [2024-07-15 21:21:23.427186] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.756 21:21:23 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.756 21:21:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.016 21:21:23 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.016 21:21:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.016 21:21:23 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.016 21:21:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.276 21:21:23 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.276 21:21:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.276 [2024-07-15 21:21:24.037294] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:34.276 21:21:24 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:34.276 21:21:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.276 21:21:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.536 21:21:24 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:34.536 21:21:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.536 21:21:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.536 21:21:24 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:34.536 21:21:24 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.536 21:21:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.536 MallocBdevForConfigChangeCheck 00:05:34.536 21:21:24 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:34.536 21:21:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.536 21:21:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.536 21:21:24 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:34.536 21:21:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.107 21:21:24 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:35.107 INFO: shutting down applications... 00:05:35.107 21:21:24 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:35.107 21:21:24 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:35.107 21:21:24 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:35.107 21:21:24 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:35.367 Calling clear_iscsi_subsystem 00:05:35.367 Calling clear_nvmf_subsystem 00:05:35.367 Calling clear_nbd_subsystem 00:05:35.367 Calling clear_ublk_subsystem 00:05:35.367 Calling clear_vhost_blk_subsystem 00:05:35.367 Calling clear_vhost_scsi_subsystem 00:05:35.367 Calling clear_bdev_subsystem 00:05:35.367 21:21:25 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:35.367 21:21:25 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:35.367 21:21:25 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:35.367 21:21:25 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:35.367 21:21:25 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.367 21:21:25 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:35.629 21:21:25 json_config -- json_config/json_config.sh@345 -- # break 00:05:35.629 21:21:25 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:35.629 21:21:25 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:35.629 21:21:25 json_config -- json_config/common.sh@31 -- # local app=target 00:05:35.629 21:21:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.629 21:21:25 json_config -- json_config/common.sh@35 -- # [[ -n 1961939 ]] 00:05:35.629 21:21:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1961939 00:05:35.629 21:21:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.629 21:21:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.629 21:21:25 json_config -- json_config/common.sh@41 -- # kill -0 1961939 00:05:35.629 21:21:25 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.200 21:21:25 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.201 21:21:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.201 21:21:25 json_config -- json_config/common.sh@41 -- # kill -0 1961939 00:05:36.201 21:21:25 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:36.201 21:21:25 json_config -- json_config/common.sh@43 -- # break 00:05:36.201 21:21:25 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:36.201 21:21:25 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:36.201 SPDK target shutdown done 00:05:36.201 21:21:25 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:36.201 INFO: relaunching applications... 00:05:36.201 21:21:25 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.201 21:21:25 json_config -- json_config/common.sh@9 -- # local app=target 00:05:36.201 21:21:25 json_config -- json_config/common.sh@10 -- # shift 00:05:36.201 21:21:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.201 21:21:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.201 21:21:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.201 21:21:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.201 21:21:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.201 21:21:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1962907 00:05:36.201 21:21:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.201 Waiting for target to run... 00:05:36.201 21:21:25 json_config -- json_config/common.sh@25 -- # waitforlisten 1962907 /var/tmp/spdk_tgt.sock 00:05:36.201 21:21:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.201 21:21:25 json_config -- common/autotest_common.sh@829 -- # '[' -z 1962907 ']' 00:05:36.201 21:21:25 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.201 21:21:25 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.201 21:21:25 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.201 21:21:25 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.201 21:21:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.201 [2024-07-15 21:21:25.939016] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
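Note: the shutdown traced just before the relaunch (json_config/common.sh: kill -SIGINT, then a bounded kill -0 poll) is the generic json_config teardown. Sketch with the same 30 x 0.5 s budget; app_pid stands in for the pid the harness tracked:

# Sketch: ask the target to shut down, wait up to ~15 s, then give up.
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2>/dev/null || break
    sleep 0.5
done
if kill -0 "$app_pid" 2>/dev/null; then
    echo "target did not exit" >&2
    exit 1
fi
echo 'SPDK target shutdown done'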
00:05:36.201 [2024-07-15 21:21:25.939074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1962907 ] 00:05:36.201 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.462 [2024-07-15 21:21:26.216131] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.722 [2024-07-15 21:21:26.269321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.983 [2024-07-15 21:21:26.761446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.244 [2024-07-15 21:21:26.793792] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:37.244 21:21:26 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.244 21:21:26 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:37.244 21:21:26 json_config -- json_config/common.sh@26 -- # echo '' 00:05:37.244 00:05:37.244 21:21:26 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:37.244 21:21:26 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:37.244 INFO: Checking if target configuration is the same... 00:05:37.244 21:21:26 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:37.244 21:21:26 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.244 21:21:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.244 + '[' 2 -ne 2 ']' 00:05:37.244 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:37.244 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:37.244 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:37.244 +++ basename /dev/fd/62 00:05:37.244 ++ mktemp /tmp/62.XXX 00:05:37.244 + tmp_file_1=/tmp/62.QOk 00:05:37.244 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.244 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:37.244 + tmp_file_2=/tmp/spdk_tgt_config.json.fwH 00:05:37.244 + ret=0 00:05:37.244 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:37.504 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:37.504 + diff -u /tmp/62.QOk /tmp/spdk_tgt_config.json.fwH 00:05:37.504 + echo 'INFO: JSON config files are the same' 00:05:37.504 INFO: JSON config files are the same 00:05:37.504 + rm /tmp/62.QOk /tmp/spdk_tgt_config.json.fwH 00:05:37.504 + exit 0 00:05:37.504 21:21:27 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:37.504 21:21:27 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:37.504 INFO: changing configuration and checking if this can be detected... 
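Note: json_diff.sh, as driven above, amounts to "dump the live config over RPC, normalize both documents with the same sort filter, diff". A sketch with fixed temp-file names instead of the mktemp names used in this run, and assuming config_filter.py reads stdin the way json_diff.sh feeds it here:

# Sketch: live config vs. the JSON file the target was started from.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
FILTER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
SAVED=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
$RPC -s /var/tmp/spdk_tgt.sock save_config | $FILTER -method sort > /tmp/live.json
$FILTER -method sort < "$SAVED" > /tmp/saved.json
diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'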
00:05:37.504 21:21:27 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:37.504 21:21:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:37.764 21:21:27 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:37.764 21:21:27 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.764 21:21:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.764 + '[' 2 -ne 2 ']' 00:05:37.764 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:37.764 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:37.764 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:37.764 +++ basename /dev/fd/62 00:05:37.764 ++ mktemp /tmp/62.XXX 00:05:37.764 + tmp_file_1=/tmp/62.nJx 00:05:37.764 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.764 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:37.764 + tmp_file_2=/tmp/spdk_tgt_config.json.cDW 00:05:37.764 + ret=0 00:05:37.764 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.025 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.025 + diff -u /tmp/62.nJx /tmp/spdk_tgt_config.json.cDW 00:05:38.025 + ret=1 00:05:38.025 + echo '=== Start of file: /tmp/62.nJx ===' 00:05:38.025 + cat /tmp/62.nJx 00:05:38.025 + echo '=== End of file: /tmp/62.nJx ===' 00:05:38.025 + echo '' 00:05:38.025 + echo '=== Start of file: /tmp/spdk_tgt_config.json.cDW ===' 00:05:38.025 + cat /tmp/spdk_tgt_config.json.cDW 00:05:38.025 + echo '=== End of file: /tmp/spdk_tgt_config.json.cDW ===' 00:05:38.025 + echo '' 00:05:38.025 + rm /tmp/62.nJx /tmp/spdk_tgt_config.json.cDW 00:05:38.025 + exit 1 00:05:38.025 21:21:27 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:38.025 INFO: configuration change detected. 
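Note: the second pass reuses the same comparison after one deliberate mutation; deleting MallocBdevForConfigChangeCheck must make the diff non-empty (the ret=1 above). Sketch, continuing with the hypothetical temp files from the previous snippet:

# Sketch: mutate the running config, then the earlier diff must now fail.
$RPC -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
$RPC -s /var/tmp/spdk_tgt.sock save_config | $FILTER -method sort > /tmp/live.json
if diff -u /tmp/saved.json /tmp/live.json; then
    echo "configuration change was not detected" >&2
    exit 1
fi
echo 'INFO: configuration change detected.'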
00:05:38.025 21:21:27 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:38.025 21:21:27 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.025 21:21:27 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:38.025 21:21:27 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:38.025 21:21:27 json_config -- json_config/json_config.sh@317 -- # [[ -n 1962907 ]] 00:05:38.025 21:21:27 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:38.025 21:21:27 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.025 21:21:27 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:38.025 21:21:27 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:38.025 21:21:27 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:38.025 21:21:27 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:38.025 21:21:27 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:38.025 21:21:27 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.025 21:21:27 json_config -- json_config/json_config.sh@323 -- # killprocess 1962907 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@948 -- # '[' -z 1962907 ']' 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@952 -- # kill -0 1962907 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@953 -- # uname 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1962907 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1962907' 00:05:38.025 killing process with pid 1962907 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@967 -- # kill 1962907 00:05:38.025 21:21:27 json_config -- common/autotest_common.sh@972 -- # wait 1962907 00:05:38.596 21:21:28 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.596 21:21:28 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:38.596 21:21:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.596 21:21:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.596 21:21:28 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:38.596 21:21:28 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:38.596 INFO: Success 00:05:38.596 00:05:38.596 real 0m6.894s 
00:05:38.596 user 0m8.312s 00:05:38.596 sys 0m1.695s 00:05:38.596 21:21:28 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.596 21:21:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.596 ************************************ 00:05:38.596 END TEST json_config 00:05:38.596 ************************************ 00:05:38.596 21:21:28 -- common/autotest_common.sh@1142 -- # return 0 00:05:38.596 21:21:28 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:38.596 21:21:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.596 21:21:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.596 21:21:28 -- common/autotest_common.sh@10 -- # set +x 00:05:38.596 ************************************ 00:05:38.596 START TEST json_config_extra_key 00:05:38.596 ************************************ 00:05:38.596 21:21:28 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:38.596 21:21:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.596 21:21:28 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:38.596 21:21:28 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.596 21:21:28 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.596 21:21:28 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.597 21:21:28 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.597 21:21:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.597 21:21:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.597 21:21:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:38.597 21:21:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.597 21:21:28 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:38.597 21:21:28 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:38.597 21:21:28 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:38.597 21:21:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.597 21:21:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.597 21:21:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.597 21:21:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:38.597 21:21:28 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:38.597 21:21:28 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:38.597 21:21:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:38.597 21:21:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:38.597 21:21:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:38.597 21:21:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:38.597 21:21:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:38.597 21:21:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:38.597 21:21:28 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:38.597 21:21:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:38.597 21:21:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:38.597 21:21:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:38.597 21:21:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:38.597 INFO: launching applications... 00:05:38.597 21:21:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:38.597 21:21:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:38.597 21:21:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:38.597 21:21:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:38.597 21:21:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:38.597 21:21:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:38.597 21:21:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.597 21:21:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.597 21:21:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1963604 00:05:38.597 21:21:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:38.597 Waiting for target to run... 00:05:38.597 21:21:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1963604 /var/tmp/spdk_tgt.sock 00:05:38.597 21:21:28 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1963604 ']' 00:05:38.597 21:21:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:38.597 21:21:28 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:38.597 21:21:28 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.597 21:21:28 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:38.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:38.597 21:21:28 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.597 21:21:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:38.597 [2024-07-15 21:21:28.383394] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
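Note: the --json file handed to spdk_tgt in this test follows SPDK's standard "subsystems" configuration layout. The block below is only an illustrative stand-in (this log never prints extra_key.json itself); Malloc0 and its sizes are made up for the example:

# Illustrative only: write a minimal config in the format spdk_tgt's --json
# expects, then start a target from it.
cat > /tmp/minimal_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 32768, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --json /tmp/minimal_config.json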
00:05:38.597 [2024-07-15 21:21:28.383468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1963604 ] 00:05:38.857 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.117 [2024-07-15 21:21:28.684364] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.117 [2024-07-15 21:21:28.742243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.376 21:21:29 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.376 21:21:29 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:39.376 21:21:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:39.376 00:05:39.376 21:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:39.376 INFO: shutting down applications... 00:05:39.376 21:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:39.376 21:21:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:39.376 21:21:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:39.376 21:21:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1963604 ]] 00:05:39.376 21:21:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1963604 00:05:39.376 21:21:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:39.376 21:21:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.376 21:21:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1963604 00:05:39.376 21:21:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.945 21:21:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.945 21:21:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.945 21:21:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1963604 00:05:39.945 21:21:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:39.945 21:21:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:39.945 21:21:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:39.945 21:21:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:39.945 SPDK target shutdown done 00:05:39.945 21:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:39.945 Success 00:05:39.945 00:05:39.945 real 0m1.435s 00:05:39.945 user 0m1.053s 00:05:39.945 sys 0m0.401s 00:05:39.945 21:21:29 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.945 21:21:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:39.945 ************************************ 00:05:39.945 END TEST json_config_extra_key 00:05:39.945 ************************************ 00:05:39.945 21:21:29 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.945 21:21:29 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.945 21:21:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.945 21:21:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.945 21:21:29 -- 
common/autotest_common.sh@10 -- # set +x 00:05:39.945 ************************************ 00:05:39.945 START TEST alias_rpc 00:05:39.945 ************************************ 00:05:39.945 21:21:29 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.205 * Looking for test storage... 00:05:40.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:40.205 21:21:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:40.205 21:21:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1963871 00:05:40.205 21:21:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1963871 00:05:40.205 21:21:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.205 21:21:29 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1963871 ']' 00:05:40.205 21:21:29 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.205 21:21:29 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.205 21:21:29 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.205 21:21:29 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.205 21:21:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.205 [2024-07-15 21:21:29.892976] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:40.205 [2024-07-15 21:21:29.893046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1963871 ] 00:05:40.205 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.205 [2024-07-15 21:21:29.957689] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.465 [2024-07-15 21:21:30.034634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.036 21:21:30 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.036 21:21:30 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:41.036 21:21:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:41.036 21:21:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1963871 00:05:41.036 21:21:30 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1963871 ']' 00:05:41.036 21:21:30 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1963871 00:05:41.036 21:21:30 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:41.036 21:21:30 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.296 21:21:30 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1963871 00:05:41.296 21:21:30 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.296 21:21:30 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.296 21:21:30 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1963871' 00:05:41.296 killing process with pid 1963871 00:05:41.296 21:21:30 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1963871 00:05:41.296 21:21:30 alias_rpc -- common/autotest_common.sh@972 -- # wait 1963871 00:05:41.296 00:05:41.296 real 0m1.369s 00:05:41.296 user 0m1.518s 00:05:41.296 sys 0m0.355s 00:05:41.296 21:21:31 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.556 21:21:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.556 ************************************ 00:05:41.556 END TEST alias_rpc 00:05:41.556 ************************************ 00:05:41.556 21:21:31 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.556 21:21:31 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:41.556 21:21:31 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:41.556 21:21:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.556 21:21:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.556 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:05:41.556 ************************************ 00:05:41.556 START TEST spdkcli_tcp 00:05:41.556 ************************************ 00:05:41.556 21:21:31 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:41.556 * Looking for test storage... 00:05:41.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:41.556 21:21:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:41.557 21:21:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:41.557 21:21:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:41.557 21:21:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:41.557 21:21:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:41.557 21:21:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:41.557 21:21:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:41.557 21:21:31 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.557 21:21:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.557 21:21:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1964153 00:05:41.557 21:21:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1964153 00:05:41.557 21:21:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:41.557 21:21:31 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1964153 ']' 00:05:41.557 21:21:31 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.557 21:21:31 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.557 21:21:31 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.557 21:21:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.557 21:21:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.557 [2024-07-15 21:21:31.347855] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
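The spdkcli_tcp test starting here exercises the RPC server over TCP instead of the default UNIX socket. A short sketch of the bridge it builds, using the commands that appear verbatim in the trace below: socat forwards 127.0.0.1:9998 to the target's UNIX socket and rpc.py is pointed at the TCP address; the cleanup kill at the end is implied rather than copied from the trace.

  # expose /var/tmp/spdk.sock on 127.0.0.1:9998
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # query the method list over TCP: 100 connection retries, 2 s timeout
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"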
00:05:41.557 [2024-07-15 21:21:31.347918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1964153 ] 00:05:41.817 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.817 [2024-07-15 21:21:31.407243] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.817 [2024-07-15 21:21:31.473984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.817 [2024-07-15 21:21:31.473986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.387 21:21:32 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.387 21:21:32 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:42.388 21:21:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1964472 00:05:42.388 21:21:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:42.388 21:21:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:42.649 [ 00:05:42.649 "bdev_malloc_delete", 00:05:42.649 "bdev_malloc_create", 00:05:42.649 "bdev_null_resize", 00:05:42.649 "bdev_null_delete", 00:05:42.649 "bdev_null_create", 00:05:42.649 "bdev_nvme_cuse_unregister", 00:05:42.649 "bdev_nvme_cuse_register", 00:05:42.649 "bdev_opal_new_user", 00:05:42.649 "bdev_opal_set_lock_state", 00:05:42.649 "bdev_opal_delete", 00:05:42.649 "bdev_opal_get_info", 00:05:42.649 "bdev_opal_create", 00:05:42.649 "bdev_nvme_opal_revert", 00:05:42.649 "bdev_nvme_opal_init", 00:05:42.649 "bdev_nvme_send_cmd", 00:05:42.649 "bdev_nvme_get_path_iostat", 00:05:42.649 "bdev_nvme_get_mdns_discovery_info", 00:05:42.649 "bdev_nvme_stop_mdns_discovery", 00:05:42.649 "bdev_nvme_start_mdns_discovery", 00:05:42.649 "bdev_nvme_set_multipath_policy", 00:05:42.649 "bdev_nvme_set_preferred_path", 00:05:42.649 "bdev_nvme_get_io_paths", 00:05:42.649 "bdev_nvme_remove_error_injection", 00:05:42.649 "bdev_nvme_add_error_injection", 00:05:42.649 "bdev_nvme_get_discovery_info", 00:05:42.649 "bdev_nvme_stop_discovery", 00:05:42.649 "bdev_nvme_start_discovery", 00:05:42.649 "bdev_nvme_get_controller_health_info", 00:05:42.649 "bdev_nvme_disable_controller", 00:05:42.649 "bdev_nvme_enable_controller", 00:05:42.649 "bdev_nvme_reset_controller", 00:05:42.649 "bdev_nvme_get_transport_statistics", 00:05:42.649 "bdev_nvme_apply_firmware", 00:05:42.649 "bdev_nvme_detach_controller", 00:05:42.649 "bdev_nvme_get_controllers", 00:05:42.649 "bdev_nvme_attach_controller", 00:05:42.649 "bdev_nvme_set_hotplug", 00:05:42.649 "bdev_nvme_set_options", 00:05:42.649 "bdev_passthru_delete", 00:05:42.649 "bdev_passthru_create", 00:05:42.649 "bdev_lvol_set_parent_bdev", 00:05:42.649 "bdev_lvol_set_parent", 00:05:42.649 "bdev_lvol_check_shallow_copy", 00:05:42.649 "bdev_lvol_start_shallow_copy", 00:05:42.649 "bdev_lvol_grow_lvstore", 00:05:42.649 "bdev_lvol_get_lvols", 00:05:42.649 "bdev_lvol_get_lvstores", 00:05:42.649 "bdev_lvol_delete", 00:05:42.649 "bdev_lvol_set_read_only", 00:05:42.649 "bdev_lvol_resize", 00:05:42.649 "bdev_lvol_decouple_parent", 00:05:42.649 "bdev_lvol_inflate", 00:05:42.649 "bdev_lvol_rename", 00:05:42.649 "bdev_lvol_clone_bdev", 00:05:42.649 "bdev_lvol_clone", 00:05:42.649 "bdev_lvol_snapshot", 00:05:42.649 "bdev_lvol_create", 00:05:42.649 "bdev_lvol_delete_lvstore", 00:05:42.649 
"bdev_lvol_rename_lvstore", 00:05:42.649 "bdev_lvol_create_lvstore", 00:05:42.649 "bdev_raid_set_options", 00:05:42.649 "bdev_raid_remove_base_bdev", 00:05:42.649 "bdev_raid_add_base_bdev", 00:05:42.649 "bdev_raid_delete", 00:05:42.649 "bdev_raid_create", 00:05:42.649 "bdev_raid_get_bdevs", 00:05:42.649 "bdev_error_inject_error", 00:05:42.649 "bdev_error_delete", 00:05:42.649 "bdev_error_create", 00:05:42.649 "bdev_split_delete", 00:05:42.649 "bdev_split_create", 00:05:42.649 "bdev_delay_delete", 00:05:42.649 "bdev_delay_create", 00:05:42.649 "bdev_delay_update_latency", 00:05:42.649 "bdev_zone_block_delete", 00:05:42.649 "bdev_zone_block_create", 00:05:42.649 "blobfs_create", 00:05:42.649 "blobfs_detect", 00:05:42.649 "blobfs_set_cache_size", 00:05:42.649 "bdev_aio_delete", 00:05:42.649 "bdev_aio_rescan", 00:05:42.649 "bdev_aio_create", 00:05:42.649 "bdev_ftl_set_property", 00:05:42.649 "bdev_ftl_get_properties", 00:05:42.649 "bdev_ftl_get_stats", 00:05:42.649 "bdev_ftl_unmap", 00:05:42.649 "bdev_ftl_unload", 00:05:42.649 "bdev_ftl_delete", 00:05:42.649 "bdev_ftl_load", 00:05:42.649 "bdev_ftl_create", 00:05:42.649 "bdev_virtio_attach_controller", 00:05:42.649 "bdev_virtio_scsi_get_devices", 00:05:42.649 "bdev_virtio_detach_controller", 00:05:42.649 "bdev_virtio_blk_set_hotplug", 00:05:42.649 "bdev_iscsi_delete", 00:05:42.649 "bdev_iscsi_create", 00:05:42.649 "bdev_iscsi_set_options", 00:05:42.649 "accel_error_inject_error", 00:05:42.649 "ioat_scan_accel_module", 00:05:42.649 "dsa_scan_accel_module", 00:05:42.649 "iaa_scan_accel_module", 00:05:42.649 "vfu_virtio_create_scsi_endpoint", 00:05:42.649 "vfu_virtio_scsi_remove_target", 00:05:42.649 "vfu_virtio_scsi_add_target", 00:05:42.649 "vfu_virtio_create_blk_endpoint", 00:05:42.649 "vfu_virtio_delete_endpoint", 00:05:42.649 "keyring_file_remove_key", 00:05:42.649 "keyring_file_add_key", 00:05:42.649 "keyring_linux_set_options", 00:05:42.649 "iscsi_get_histogram", 00:05:42.649 "iscsi_enable_histogram", 00:05:42.649 "iscsi_set_options", 00:05:42.649 "iscsi_get_auth_groups", 00:05:42.649 "iscsi_auth_group_remove_secret", 00:05:42.649 "iscsi_auth_group_add_secret", 00:05:42.649 "iscsi_delete_auth_group", 00:05:42.649 "iscsi_create_auth_group", 00:05:42.649 "iscsi_set_discovery_auth", 00:05:42.649 "iscsi_get_options", 00:05:42.649 "iscsi_target_node_request_logout", 00:05:42.649 "iscsi_target_node_set_redirect", 00:05:42.649 "iscsi_target_node_set_auth", 00:05:42.649 "iscsi_target_node_add_lun", 00:05:42.649 "iscsi_get_stats", 00:05:42.649 "iscsi_get_connections", 00:05:42.649 "iscsi_portal_group_set_auth", 00:05:42.649 "iscsi_start_portal_group", 00:05:42.649 "iscsi_delete_portal_group", 00:05:42.649 "iscsi_create_portal_group", 00:05:42.649 "iscsi_get_portal_groups", 00:05:42.649 "iscsi_delete_target_node", 00:05:42.649 "iscsi_target_node_remove_pg_ig_maps", 00:05:42.649 "iscsi_target_node_add_pg_ig_maps", 00:05:42.649 "iscsi_create_target_node", 00:05:42.649 "iscsi_get_target_nodes", 00:05:42.649 "iscsi_delete_initiator_group", 00:05:42.649 "iscsi_initiator_group_remove_initiators", 00:05:42.649 "iscsi_initiator_group_add_initiators", 00:05:42.649 "iscsi_create_initiator_group", 00:05:42.649 "iscsi_get_initiator_groups", 00:05:42.649 "nvmf_set_crdt", 00:05:42.649 "nvmf_set_config", 00:05:42.649 "nvmf_set_max_subsystems", 00:05:42.649 "nvmf_stop_mdns_prr", 00:05:42.649 "nvmf_publish_mdns_prr", 00:05:42.649 "nvmf_subsystem_get_listeners", 00:05:42.649 "nvmf_subsystem_get_qpairs", 00:05:42.649 "nvmf_subsystem_get_controllers", 00:05:42.649 
"nvmf_get_stats", 00:05:42.649 "nvmf_get_transports", 00:05:42.649 "nvmf_create_transport", 00:05:42.649 "nvmf_get_targets", 00:05:42.649 "nvmf_delete_target", 00:05:42.649 "nvmf_create_target", 00:05:42.649 "nvmf_subsystem_allow_any_host", 00:05:42.649 "nvmf_subsystem_remove_host", 00:05:42.649 "nvmf_subsystem_add_host", 00:05:42.649 "nvmf_ns_remove_host", 00:05:42.649 "nvmf_ns_add_host", 00:05:42.649 "nvmf_subsystem_remove_ns", 00:05:42.649 "nvmf_subsystem_add_ns", 00:05:42.649 "nvmf_subsystem_listener_set_ana_state", 00:05:42.649 "nvmf_discovery_get_referrals", 00:05:42.649 "nvmf_discovery_remove_referral", 00:05:42.649 "nvmf_discovery_add_referral", 00:05:42.649 "nvmf_subsystem_remove_listener", 00:05:42.649 "nvmf_subsystem_add_listener", 00:05:42.649 "nvmf_delete_subsystem", 00:05:42.649 "nvmf_create_subsystem", 00:05:42.649 "nvmf_get_subsystems", 00:05:42.649 "env_dpdk_get_mem_stats", 00:05:42.649 "nbd_get_disks", 00:05:42.649 "nbd_stop_disk", 00:05:42.649 "nbd_start_disk", 00:05:42.649 "ublk_recover_disk", 00:05:42.649 "ublk_get_disks", 00:05:42.649 "ublk_stop_disk", 00:05:42.649 "ublk_start_disk", 00:05:42.649 "ublk_destroy_target", 00:05:42.649 "ublk_create_target", 00:05:42.649 "virtio_blk_create_transport", 00:05:42.649 "virtio_blk_get_transports", 00:05:42.650 "vhost_controller_set_coalescing", 00:05:42.650 "vhost_get_controllers", 00:05:42.650 "vhost_delete_controller", 00:05:42.650 "vhost_create_blk_controller", 00:05:42.650 "vhost_scsi_controller_remove_target", 00:05:42.650 "vhost_scsi_controller_add_target", 00:05:42.650 "vhost_start_scsi_controller", 00:05:42.650 "vhost_create_scsi_controller", 00:05:42.650 "thread_set_cpumask", 00:05:42.650 "framework_get_governor", 00:05:42.650 "framework_get_scheduler", 00:05:42.650 "framework_set_scheduler", 00:05:42.650 "framework_get_reactors", 00:05:42.650 "thread_get_io_channels", 00:05:42.650 "thread_get_pollers", 00:05:42.650 "thread_get_stats", 00:05:42.650 "framework_monitor_context_switch", 00:05:42.650 "spdk_kill_instance", 00:05:42.650 "log_enable_timestamps", 00:05:42.650 "log_get_flags", 00:05:42.650 "log_clear_flag", 00:05:42.650 "log_set_flag", 00:05:42.650 "log_get_level", 00:05:42.650 "log_set_level", 00:05:42.650 "log_get_print_level", 00:05:42.650 "log_set_print_level", 00:05:42.650 "framework_enable_cpumask_locks", 00:05:42.650 "framework_disable_cpumask_locks", 00:05:42.650 "framework_wait_init", 00:05:42.650 "framework_start_init", 00:05:42.650 "scsi_get_devices", 00:05:42.650 "bdev_get_histogram", 00:05:42.650 "bdev_enable_histogram", 00:05:42.650 "bdev_set_qos_limit", 00:05:42.650 "bdev_set_qd_sampling_period", 00:05:42.650 "bdev_get_bdevs", 00:05:42.650 "bdev_reset_iostat", 00:05:42.650 "bdev_get_iostat", 00:05:42.650 "bdev_examine", 00:05:42.650 "bdev_wait_for_examine", 00:05:42.650 "bdev_set_options", 00:05:42.650 "notify_get_notifications", 00:05:42.650 "notify_get_types", 00:05:42.650 "accel_get_stats", 00:05:42.650 "accel_set_options", 00:05:42.650 "accel_set_driver", 00:05:42.650 "accel_crypto_key_destroy", 00:05:42.650 "accel_crypto_keys_get", 00:05:42.650 "accel_crypto_key_create", 00:05:42.650 "accel_assign_opc", 00:05:42.650 "accel_get_module_info", 00:05:42.650 "accel_get_opc_assignments", 00:05:42.650 "vmd_rescan", 00:05:42.650 "vmd_remove_device", 00:05:42.650 "vmd_enable", 00:05:42.650 "sock_get_default_impl", 00:05:42.650 "sock_set_default_impl", 00:05:42.650 "sock_impl_set_options", 00:05:42.650 "sock_impl_get_options", 00:05:42.650 "iobuf_get_stats", 00:05:42.650 "iobuf_set_options", 
00:05:42.650 "keyring_get_keys", 00:05:42.650 "framework_get_pci_devices", 00:05:42.650 "framework_get_config", 00:05:42.650 "framework_get_subsystems", 00:05:42.650 "vfu_tgt_set_base_path", 00:05:42.650 "trace_get_info", 00:05:42.650 "trace_get_tpoint_group_mask", 00:05:42.650 "trace_disable_tpoint_group", 00:05:42.650 "trace_enable_tpoint_group", 00:05:42.650 "trace_clear_tpoint_mask", 00:05:42.650 "trace_set_tpoint_mask", 00:05:42.650 "spdk_get_version", 00:05:42.650 "rpc_get_methods" 00:05:42.650 ] 00:05:42.650 21:21:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:42.650 21:21:32 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.650 21:21:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.650 21:21:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:42.650 21:21:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1964153 00:05:42.650 21:21:32 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1964153 ']' 00:05:42.650 21:21:32 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1964153 00:05:42.650 21:21:32 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:42.650 21:21:32 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.650 21:21:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1964153 00:05:42.650 21:21:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.650 21:21:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.650 21:21:32 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1964153' 00:05:42.650 killing process with pid 1964153 00:05:42.650 21:21:32 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1964153 00:05:42.650 21:21:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1964153 00:05:42.909 00:05:42.909 real 0m1.387s 00:05:42.909 user 0m2.549s 00:05:42.909 sys 0m0.410s 00:05:42.909 21:21:32 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.909 21:21:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.909 ************************************ 00:05:42.909 END TEST spdkcli_tcp 00:05:42.909 ************************************ 00:05:42.909 21:21:32 -- common/autotest_common.sh@1142 -- # return 0 00:05:42.909 21:21:32 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:42.909 21:21:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.909 21:21:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.909 21:21:32 -- common/autotest_common.sh@10 -- # set +x 00:05:42.909 ************************************ 00:05:42.909 START TEST dpdk_mem_utility 00:05:42.909 ************************************ 00:05:42.909 21:21:32 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.191 * Looking for test storage... 
00:05:43.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:43.191 21:21:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:43.191 21:21:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1964540 00:05:43.191 21:21:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1964540 00:05:43.191 21:21:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.191 21:21:32 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1964540 ']' 00:05:43.191 21:21:32 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.191 21:21:32 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.191 21:21:32 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.191 21:21:32 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.191 21:21:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:43.191 [2024-07-15 21:21:32.781736] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:43.191 [2024-07-15 21:21:32.781792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1964540 ] 00:05:43.191 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.191 [2024-07-15 21:21:32.842566] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.191 [2024-07-15 21:21:32.909968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.760 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.760 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:43.760 21:21:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:43.760 21:21:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:43.760 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.760 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:43.760 { 00:05:43.760 "filename": "/tmp/spdk_mem_dump.txt" 00:05:43.760 } 00:05:43.760 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.760 21:21:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:44.022 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:44.022 1 heaps totaling size 814.000000 MiB 00:05:44.022 size: 814.000000 MiB heap id: 0 00:05:44.022 end heaps---------- 00:05:44.022 8 mempools totaling size 598.116089 MiB 00:05:44.022 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:44.022 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:44.022 size: 84.521057 MiB name: bdev_io_1964540 00:05:44.022 size: 51.011292 MiB name: evtpool_1964540 00:05:44.022 
size: 50.003479 MiB name: msgpool_1964540 00:05:44.022 size: 21.763794 MiB name: PDU_Pool 00:05:44.022 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:44.022 size: 0.026123 MiB name: Session_Pool 00:05:44.022 end mempools------- 00:05:44.022 6 memzones totaling size 4.142822 MiB 00:05:44.022 size: 1.000366 MiB name: RG_ring_0_1964540 00:05:44.022 size: 1.000366 MiB name: RG_ring_1_1964540 00:05:44.022 size: 1.000366 MiB name: RG_ring_4_1964540 00:05:44.022 size: 1.000366 MiB name: RG_ring_5_1964540 00:05:44.022 size: 0.125366 MiB name: RG_ring_2_1964540 00:05:44.022 size: 0.015991 MiB name: RG_ring_3_1964540 00:05:44.022 end memzones------- 00:05:44.022 21:21:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:44.022 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:44.022 list of free elements. size: 12.519348 MiB 00:05:44.022 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:44.022 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:44.022 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:44.022 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:44.022 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:44.022 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:44.022 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:44.022 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:44.022 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:44.022 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:44.022 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:44.022 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:44.022 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:44.022 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:44.022 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:44.022 list of standard malloc elements. 
size: 199.218079 MiB 00:05:44.022 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:44.022 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:44.022 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:44.022 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:44.022 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:44.022 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:44.022 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:44.022 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:44.022 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:44.022 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:44.022 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:44.022 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:44.022 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:44.022 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:44.022 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:44.022 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:44.022 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:44.022 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:44.022 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:44.022 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:44.022 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:44.022 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:44.022 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:44.022 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:44.022 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:44.022 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:44.022 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:44.022 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:44.022 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:44.022 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:44.022 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:44.022 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:44.022 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:44.022 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:44.022 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:44.022 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:44.022 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:44.022 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:44.022 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:44.022 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:44.022 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:44.022 list of memzone associated elements. 
size: 602.262573 MiB 00:05:44.022 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:44.022 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:44.022 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:44.022 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:44.022 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:44.022 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1964540_0 00:05:44.022 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:44.022 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1964540_0 00:05:44.022 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:44.022 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1964540_0 00:05:44.022 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:44.022 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:44.022 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:44.022 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:44.022 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:44.022 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1964540 00:05:44.022 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:44.022 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1964540 00:05:44.022 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:44.022 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1964540 00:05:44.022 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:44.022 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:44.022 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:44.022 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:44.022 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:44.022 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:44.022 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:44.022 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:44.022 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:44.022 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1964540 00:05:44.022 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:44.022 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1964540 00:05:44.022 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:44.022 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1964540 00:05:44.022 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:44.022 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1964540 00:05:44.022 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:44.022 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1964540 00:05:44.022 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:44.022 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:44.022 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:44.022 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:44.022 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:44.022 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:44.022 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:44.022 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1964540 00:05:44.022 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:44.022 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:44.022 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:44.022 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:44.023 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:44.023 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1964540 00:05:44.023 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:44.023 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:44.023 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:44.023 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1964540 00:05:44.023 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:44.023 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1964540 00:05:44.023 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:44.023 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:44.023 21:21:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:44.023 21:21:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1964540 00:05:44.023 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1964540 ']' 00:05:44.023 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1964540 00:05:44.023 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:44.023 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.023 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1964540 00:05:44.023 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.023 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.023 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1964540' 00:05:44.023 killing process with pid 1964540 00:05:44.023 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1964540 00:05:44.023 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1964540 00:05:44.283 00:05:44.283 real 0m1.282s 00:05:44.283 user 0m1.407s 00:05:44.283 sys 0m0.327s 00:05:44.283 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.283 21:21:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.283 ************************************ 00:05:44.283 END TEST dpdk_mem_utility 00:05:44.283 ************************************ 00:05:44.283 21:21:33 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.283 21:21:33 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:44.283 21:21:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.283 21:21:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.283 21:21:33 -- common/autotest_common.sh@10 -- # set +x 00:05:44.283 ************************************ 00:05:44.283 START TEST event 00:05:44.283 ************************************ 00:05:44.283 21:21:33 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:44.283 * Looking for test storage... 
00:05:44.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:44.283 21:21:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:44.283 21:21:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:44.283 21:21:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.283 21:21:34 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:44.283 21:21:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.283 21:21:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.543 ************************************ 00:05:44.543 START TEST event_perf 00:05:44.543 ************************************ 00:05:44.543 21:21:34 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.543 Running I/O for 1 seconds...[2024-07-15 21:21:34.145249] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:44.543 [2024-07-15 21:21:34.145356] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1964927 ] 00:05:44.543 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.543 [2024-07-15 21:21:34.211663] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.543 [2024-07-15 21:21:34.288578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.543 [2024-07-15 21:21:34.288694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.543 [2024-07-15 21:21:34.288853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.543 Running I/O for 1 seconds...[2024-07-15 21:21:34.288853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.925 00:05:45.925 lcore 0: 172554 00:05:45.925 lcore 1: 172555 00:05:45.925 lcore 2: 172552 00:05:45.925 lcore 3: 172555 00:05:45.925 done. 00:05:45.925 00:05:45.925 real 0m1.219s 00:05:45.925 user 0m4.132s 00:05:45.925 sys 0m0.082s 00:05:45.925 21:21:35 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.925 21:21:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.925 ************************************ 00:05:45.925 END TEST event_perf 00:05:45.925 ************************************ 00:05:45.925 21:21:35 event -- common/autotest_common.sh@1142 -- # return 0 00:05:45.925 21:21:35 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:45.925 21:21:35 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:45.925 21:21:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.925 21:21:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.925 ************************************ 00:05:45.925 START TEST event_reactor 00:05:45.925 ************************************ 00:05:45.925 21:21:35 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:45.925 [2024-07-15 21:21:35.443289] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
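The event_perf run recorded above is a plain binary invocation and can be repeated from a built tree; a sketch, assuming the workspace path from the log. -m 0xF starts four reactors and -t 1 runs the measurement for one second; each "lcore N:" line in the output is the number of events that reactor processed.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # four reactors (cores 0-3), one-second run, same flags as the trace above
  "$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1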
00:05:45.925 [2024-07-15 21:21:35.443408] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1965286 ] 00:05:45.925 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.925 [2024-07-15 21:21:35.510551] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.925 [2024-07-15 21:21:35.575042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.866 test_start 00:05:46.866 oneshot 00:05:46.866 tick 100 00:05:46.866 tick 100 00:05:46.866 tick 250 00:05:46.866 tick 100 00:05:46.866 tick 100 00:05:46.866 tick 250 00:05:46.866 tick 100 00:05:46.866 tick 500 00:05:46.866 tick 100 00:05:46.866 tick 100 00:05:46.866 tick 250 00:05:46.866 tick 100 00:05:46.866 tick 100 00:05:46.866 test_end 00:05:46.866 00:05:46.866 real 0m1.207s 00:05:46.866 user 0m1.131s 00:05:46.866 sys 0m0.073s 00:05:46.866 21:21:36 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.866 21:21:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:46.866 ************************************ 00:05:46.866 END TEST event_reactor 00:05:46.866 ************************************ 00:05:46.866 21:21:36 event -- common/autotest_common.sh@1142 -- # return 0 00:05:46.866 21:21:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:46.866 21:21:36 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:46.866 21:21:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.866 21:21:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.127 ************************************ 00:05:47.127 START TEST event_reactor_perf 00:05:47.127 ************************************ 00:05:47.127 21:21:36 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.127 [2024-07-15 21:21:36.724621] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
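The reactor subtest above schedules a one-shot event plus timed pollers and prints a line for each firing (oneshot, tick 100, tick 250, tick 500 in the output). A sketch of the same one-second invocation, assuming the workspace path from the log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/test/event/reactor/reactor" -t 1    # -t 1: run the reactor test for one second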
00:05:47.127 [2024-07-15 21:21:36.724716] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1965510 ] 00:05:47.127 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.127 [2024-07-15 21:21:36.788144] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.127 [2024-07-15 21:21:36.855375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.539 test_start 00:05:48.539 test_end 00:05:48.539 Performance: 371680 events per second 00:05:48.539 00:05:48.539 real 0m1.206s 00:05:48.539 user 0m1.123s 00:05:48.539 sys 0m0.080s 00:05:48.539 21:21:37 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.539 21:21:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.539 ************************************ 00:05:48.539 END TEST event_reactor_perf 00:05:48.539 ************************************ 00:05:48.539 21:21:37 event -- common/autotest_common.sh@1142 -- # return 0 00:05:48.539 21:21:37 event -- event/event.sh@49 -- # uname -s 00:05:48.539 21:21:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:48.539 21:21:37 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:48.539 21:21:37 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.539 21:21:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.539 21:21:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.539 ************************************ 00:05:48.539 START TEST event_scheduler 00:05:48.539 ************************************ 00:05:48.539 21:21:37 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:48.539 * Looking for test storage... 00:05:48.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:48.539 21:21:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:48.539 21:21:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1965752 00:05:48.539 21:21:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.539 21:21:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:48.539 21:21:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1965752 00:05:48.539 21:21:38 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1965752 ']' 00:05:48.539 21:21:38 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.539 21:21:38 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.539 21:21:38 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
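The scheduler test launched above starts its application paused (--wait-for-rpc) and then drives it over the default RPC socket; the calls visible in the following trace are framework_set_scheduler dynamic and framework_start_init. A sketch of that sequence with the flags copied from the trace (-m 0xF: four cores; -p 0x2 selects the main lcore, matching the --main-lcore=2 EAL argument below); the socket path is the test's default.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  # (the real test waits for /var/tmp/spdk.sock via waitforlisten before issuing RPCs)
  # switch to the dynamic scheduler, then let the framework finish initializing
  "$SPDK/scripts/rpc.py" framework_set_scheduler dynamic
  "$SPDK/scripts/rpc.py" framework_start_init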
00:05:48.539 21:21:38 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.539 21:21:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.540 [2024-07-15 21:21:38.140428] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:05:48.540 [2024-07-15 21:21:38.140490] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1965752 ] 00:05:48.540 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.540 [2024-07-15 21:21:38.196179] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.540 [2024-07-15 21:21:38.258644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.540 [2024-07-15 21:21:38.258803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.540 [2024-07-15 21:21:38.258939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.540 [2024-07-15 21:21:38.258940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.479 21:21:38 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.479 21:21:38 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:49.479 21:21:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:49.479 21:21:38 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.479 21:21:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.479 [2024-07-15 21:21:38.925022] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:49.479 [2024-07-15 21:21:38.925038] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:49.479 [2024-07-15 21:21:38.925045] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:49.479 [2024-07-15 21:21:38.925049] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:49.479 [2024-07-15 21:21:38.925053] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:49.479 21:21:38 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.479 21:21:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:49.479 21:21:38 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.479 21:21:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.479 [2024-07-15 21:21:38.983356] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
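The scheduler_create_thread subtest that follows creates its worker threads through an RPC plugin shipped with the test; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py. A representative call copied from the trace, creating a thread named active_pinned pinned to core 0 with 100% requested activity. Treat the PYTHONPATH value as an assumption: it supposes the scheduler_plugin module lives next to scheduler.sh in test/event/scheduler.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  PYTHONPATH="$SPDK/test/event/scheduler" "$SPDK/scripts/rpc.py" \
      --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100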
00:05:49.479 21:21:38 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.479 21:21:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:49.479 21:21:38 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.479 21:21:38 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.479 21:21:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.479 ************************************ 00:05:49.479 START TEST scheduler_create_thread 00:05:49.479 ************************************ 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.479 2 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.479 3 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.479 4 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.479 5 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.479 6 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.479 7 00:05:49.479 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.480 21:21:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:49.480 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.480 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.480 8 00:05:49.480 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.480 21:21:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:49.480 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.480 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.480 9 00:05:49.480 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.480 21:21:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:49.480 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.480 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.049 10 00:05:50.049 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.049 21:21:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:50.049 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.049 21:21:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.430 21:21:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.430 21:21:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:51.430 21:21:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:51.430 21:21:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.430 21:21:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.021 21:21:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.021 21:21:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:52.021 21:21:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.021 21:21:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.963 21:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.963 21:21:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:52.963 21:21:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:52.963 21:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.963 21:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.533 21:21:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.533 00:05:53.533 real 0m4.224s 00:05:53.533 user 0m0.023s 00:05:53.533 sys 0m0.007s 00:05:53.533 21:21:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.533 21:21:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.533 ************************************ 00:05:53.533 END TEST scheduler_create_thread 00:05:53.533 ************************************ 00:05:53.533 21:21:43 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:53.533 21:21:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:53.533 21:21:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1965752 00:05:53.533 21:21:43 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1965752 ']' 00:05:53.533 21:21:43 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1965752 00:05:53.533 21:21:43 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:53.533 21:21:43 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.533 21:21:43 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1965752 00:05:53.533 21:21:43 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:53.533 21:21:43 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:53.533 21:21:43 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1965752' 00:05:53.533 killing process with pid 1965752 00:05:53.533 21:21:43 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1965752 00:05:53.533 21:21:43 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1965752 00:05:53.793 [2024-07-15 21:21:43.524674] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
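The teardown above runs the killprocess helper: confirm the pid still exists (kill -0), inspect the process name, signal it, and wait for it to exit. A simplified sketch of that pattern, not the full autotest_common.sh implementation (the real helper also special-cases processes whose name is sudo, as the '[' reactor_2 = sudo ']' check in the trace shows), and it assumes the pid belongs to a child of the current shell so that wait can reap it.

  killprocess() {
      local pid=$1
      kill -0 "$pid"                      # fail early if the process is already gone
      ps --no-headers -o comm= "$pid"     # the helper inspects the process name here
      kill "$pid"                         # default SIGTERM
      wait "$pid"                         # reap it and propagate the exit status
  }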
00:05:54.054 00:05:54.054 real 0m5.705s 00:05:54.054 user 0m12.740s 00:05:54.054 sys 0m0.361s 00:05:54.054 21:21:43 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.054 21:21:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.054 ************************************ 00:05:54.054 END TEST event_scheduler 00:05:54.054 ************************************ 00:05:54.054 21:21:43 event -- common/autotest_common.sh@1142 -- # return 0 00:05:54.054 21:21:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:54.054 21:21:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:54.054 21:21:43 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.054 21:21:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.054 21:21:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.054 ************************************ 00:05:54.054 START TEST app_repeat 00:05:54.054 ************************************ 00:05:54.054 21:21:43 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:54.054 21:21:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.054 21:21:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.054 21:21:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:54.054 21:21:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.054 21:21:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:54.054 21:21:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:54.054 21:21:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:54.054 21:21:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1967088 00:05:54.054 21:21:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.054 21:21:43 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:54.054 21:21:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1967088' 00:05:54.054 Process app_repeat pid: 1967088 00:05:54.054 21:21:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.054 21:21:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:54.054 spdk_app_start Round 0 00:05:54.054 21:21:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1967088 /var/tmp/spdk-nbd.sock 00:05:54.054 21:21:43 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1967088 ']' 00:05:54.054 21:21:43 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.054 21:21:43 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.054 21:21:43 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.054 21:21:43 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.054 21:21:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.054 [2024-07-15 21:21:43.813625] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
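At this point the event_scheduler suite has passed and app_repeat starts: the app is launched with its own RPC socket (-r /var/tmp/spdk-nbd.sock), a two-core mask (-m 0x3) and four repeats per round (-t 4), and the harness then blocks in waitforlisten until that UNIX socket answers RPCs. A rough stand-in for that launch-and-wait step, without the autotest_common.sh helpers, could look like this; the polling loop, its 10-second budget and the use of rpc_get_methods as the probe are assumptions.

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/spdk-nbd.sock

  "$SPDK_DIR/test/event/app_repeat/app_repeat" -r "$SOCK" -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'kill "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT

  # Poll until the app accepts a trivial RPC on its socket.
  for _ in $(seq 1 100); do
      "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done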
00:05:54.054 [2024-07-15 21:21:43.813727] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967088 ] 00:05:54.054 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.315 [2024-07-15 21:21:43.877376] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.315 [2024-07-15 21:21:43.944671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.315 [2024-07-15 21:21:43.944673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.026 21:21:44 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.026 21:21:44 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:55.026 21:21:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.026 Malloc0 00:05:55.026 21:21:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.287 Malloc1 00:05:55.287 21:21:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.287 21:21:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.287 21:21:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.287 21:21:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.287 21:21:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.287 21:21:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.287 21:21:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.287 21:21:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.287 21:21:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.287 21:21:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.287 21:21:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.287 21:21:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.287 21:21:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.287 21:21:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.287 21:21:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.287 21:21:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.548 /dev/nbd0 00:05:55.548 21:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.548 21:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.548 21:21:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:55.548 21:21:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:55.548 21:21:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.548 21:21:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.548 21:21:45 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:55.548 21:21:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:55.548 21:21:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.548 21:21:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.548 21:21:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.548 1+0 records in 00:05:55.548 1+0 records out 00:05:55.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252634 s, 16.2 MB/s 00:05:55.548 21:21:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.548 21:21:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:55.549 21:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.549 21:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.549 21:21:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:55.549 /dev/nbd1 00:05:55.549 21:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:55.549 21:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.549 1+0 records in 00:05:55.549 1+0 records out 00:05:55.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023879 s, 17.2 MB/s 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:55.549 21:21:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:55.549 21:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.549 21:21:45 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.549 21:21:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.549 21:21:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.549 21:21:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.809 { 00:05:55.809 "nbd_device": "/dev/nbd0", 00:05:55.809 "bdev_name": "Malloc0" 00:05:55.809 }, 00:05:55.809 { 00:05:55.809 "nbd_device": "/dev/nbd1", 00:05:55.809 "bdev_name": "Malloc1" 00:05:55.809 } 00:05:55.809 ]' 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.809 { 00:05:55.809 "nbd_device": "/dev/nbd0", 00:05:55.809 "bdev_name": "Malloc0" 00:05:55.809 }, 00:05:55.809 { 00:05:55.809 "nbd_device": "/dev/nbd1", 00:05:55.809 "bdev_name": "Malloc1" 00:05:55.809 } 00:05:55.809 ]' 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.809 /dev/nbd1' 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.809 /dev/nbd1' 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.809 256+0 records in 00:05:55.809 256+0 records out 00:05:55.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012474 s, 84.1 MB/s 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.809 256+0 records in 00:05:55.809 256+0 records out 00:05:55.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226243 s, 46.3 MB/s 00:05:55.809 21:21:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.810 21:21:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.810 256+0 records in 00:05:55.810 256+0 records out 00:05:55.810 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.01716 s, 61.1 MB/s 00:05:55.810 21:21:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.810 21:21:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.810 21:21:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.810 21:21:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.810 21:21:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.810 21:21:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.810 21:21:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.810 21:21:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.810 21:21:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.810 21:21:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.810 21:21:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.070 21:21:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.070 21:21:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.070 21:21:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.070 21:21:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.070 21:21:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.070 21:21:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.070 21:21:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.070 21:21:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.070 21:21:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.070 21:21:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.071 21:21:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.071 21:21:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.071 21:21:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.071 21:21:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.071 21:21:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.071 21:21:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.071 21:21:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.071 21:21:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.331 21:21:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.331 21:21:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.331 21:21:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.331 21:21:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.331 21:21:45 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.331 21:21:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.331 21:21:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.331 21:21:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.331 21:21:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.331 21:21:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.331 21:21:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.331 21:21:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.592 21:21:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.592 21:21:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.592 21:21:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.592 21:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.592 21:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.592 21:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.592 21:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.592 21:21:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.592 21:21:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.592 21:21:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.592 21:21:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.592 21:21:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.592 21:21:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.853 [2024-07-15 21:21:46.482568] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.853 [2024-07-15 21:21:46.545176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.853 [2024-07-15 21:21:46.545198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.853 [2024-07-15 21:21:46.577025] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.853 [2024-07-15 21:21:46.577060] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:00.150 21:21:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.150 21:21:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:00.150 spdk_app_start Round 1 00:06:00.150 21:21:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1967088 /var/tmp/spdk-nbd.sock 00:06:00.150 21:21:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1967088 ']' 00:06:00.150 21:21:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.150 21:21:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.150 21:21:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
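Each round verifies both exported devices the same way, as traced above: 1 MiB of random data goes into a scratch file, the file is copied onto /dev/nbd0 and /dev/nbd1 with O_DIRECT, and cmp then reads the data back from each device before the scratch file is removed. Stripped of the nbd_common.sh wrappers, the check is roughly the following; only the scratch-file path is an assumption.

  tmp=/tmp/nbdrandtest                      # assumed scratch location
  nbd_list=(/dev/nbd0 /dev/nbd1)

  dd if=/dev/urandom of="$tmp" bs=4096 count=256               # 1 MiB of random data

  for dev in "${nbd_list[@]}"; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct     # push it through the nbd device
  done

  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp" "$dev"                                # byte-for-byte read-back check
  done

  rm "$tmp"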
00:06:00.150 21:21:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.150 21:21:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.150 21:21:49 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.150 21:21:49 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:00.150 21:21:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.150 Malloc0 00:06:00.150 21:21:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.150 Malloc1 00:06:00.150 21:21:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.150 21:21:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.150 21:21:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.150 21:21:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.150 21:21:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.150 21:21:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.150 21:21:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.150 21:21:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.150 21:21:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.150 21:21:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.150 21:21:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.150 21:21:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.150 21:21:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.150 21:21:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.150 21:21:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.150 21:21:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.409 /dev/nbd0 00:06:00.409 21:21:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.409 21:21:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.409 21:21:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:00.409 21:21:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:00.409 21:21:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:00.409 21:21:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:00.409 21:21:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:00.409 21:21:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:00.409 21:21:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:00.409 21:21:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:00.409 21:21:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:00.409 1+0 records in 00:06:00.409 1+0 records out 00:06:00.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297916 s, 13.7 MB/s 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:00.409 21:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.409 21:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.409 21:21:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.409 /dev/nbd1 00:06:00.409 21:21:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.409 21:21:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.409 1+0 records in 00:06:00.409 1+0 records out 00:06:00.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250315 s, 16.4 MB/s 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:00.409 21:21:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:00.409 21:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.409 21:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.409 21:21:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.409 21:21:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.409 21:21:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:00.669 { 00:06:00.669 "nbd_device": "/dev/nbd0", 00:06:00.669 "bdev_name": "Malloc0" 00:06:00.669 }, 00:06:00.669 { 00:06:00.669 "nbd_device": "/dev/nbd1", 00:06:00.669 "bdev_name": "Malloc1" 00:06:00.669 } 00:06:00.669 ]' 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.669 { 00:06:00.669 "nbd_device": "/dev/nbd0", 00:06:00.669 "bdev_name": "Malloc0" 00:06:00.669 }, 00:06:00.669 { 00:06:00.669 "nbd_device": "/dev/nbd1", 00:06:00.669 "bdev_name": "Malloc1" 00:06:00.669 } 00:06:00.669 ]' 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.669 /dev/nbd1' 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.669 /dev/nbd1' 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.669 256+0 records in 00:06:00.669 256+0 records out 00:06:00.669 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122814 s, 85.4 MB/s 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.669 256+0 records in 00:06:00.669 256+0 records out 00:06:00.669 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161989 s, 64.7 MB/s 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.669 21:21:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.928 256+0 records in 00:06:00.928 256+0 records out 00:06:00.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200482 s, 52.3 MB/s 00:06:00.928 21:21:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.928 21:21:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.928 21:21:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.928 21:21:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.928 21:21:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.928 21:21:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.928 21:21:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.928 21:21:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.928 21:21:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.928 21:21:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.928 21:21:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.928 21:21:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.928 21:21:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.928 21:21:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.929 21:21:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.929 21:21:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.929 21:21:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.929 21:21:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.929 21:21:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.929 21:21:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.929 21:21:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.929 21:21:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.929 21:21:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.929 21:21:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.929 21:21:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.929 21:21:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.929 21:21:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.929 21:21:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.929 21:21:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.187 21:21:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.187 21:21:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.187 21:21:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.187 21:21:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.187 21:21:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.187 21:21:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.187 21:21:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.187 21:21:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.187 21:21:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.187 21:21:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.187 21:21:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.446 21:21:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.446 21:21:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.446 21:21:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.446 21:21:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.446 21:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.446 21:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.446 21:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.446 21:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.446 21:21:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.446 21:21:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.446 21:21:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.446 21:21:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.446 21:21:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.446 21:21:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.705 [2024-07-15 21:21:51.341421] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.705 [2024-07-15 21:21:51.403656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.705 [2024-07-15 21:21:51.403660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.705 [2024-07-15 21:21:51.436380] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.705 [2024-07-15 21:21:51.436415] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.999 21:21:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.999 21:21:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:04.999 spdk_app_start Round 2 00:06:04.999 21:21:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1967088 /var/tmp/spdk-nbd.sock 00:06:04.999 21:21:54 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1967088 ']' 00:06:04.999 21:21:54 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.999 21:21:54 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.999 21:21:54 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
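Before any data is written, the waitfornbd helper traced in every round first confirms the kernel has published the device: it polls /proc/partitions for the nbd name (up to 20 attempts) and then reads one 4 KiB block with O_DIRECT to prove the device actually services I/O. A standalone sketch of that readiness check follows; the poll interval and the /dev/null destination are assumptions, the rest mirrors the trace.

  wait_for_nbd() {
      local name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions && break
          sleep 0.1                                  # assumed poll interval
      done
      # Prove the device answers reads, not just that the node exists.
      dd if="/dev/$name" of=/dev/null bs=4096 count=1 iflag=direct
  }

  wait_for_nbd nbd0
  wait_for_nbd nbd1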
00:06:04.999 21:21:54 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.999 21:21:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.999 21:21:54 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.999 21:21:54 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:04.999 21:21:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.999 Malloc0 00:06:04.999 21:21:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.999 Malloc1 00:06:04.999 21:21:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.999 21:21:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.999 21:21:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.999 21:21:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.999 21:21:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.999 21:21:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.999 21:21:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.999 21:21:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.999 21:21:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.999 21:21:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.999 21:21:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.999 21:21:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.999 21:21:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.999 21:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.999 21:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.999 21:21:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.259 /dev/nbd0 00:06:05.259 21:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.259 21:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.259 21:21:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:05.259 21:21:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:05.259 21:21:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.259 21:21:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.259 21:21:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:05.259 21:21:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:05.259 21:21:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.259 21:21:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.259 21:21:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:05.259 1+0 records in 00:06:05.259 1+0 records out 00:06:05.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247514 s, 16.5 MB/s 00:06:05.259 21:21:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.259 21:21:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:05.259 21:21:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.259 21:21:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.259 21:21:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:05.259 21:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.259 21:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.259 21:21:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.259 /dev/nbd1 00:06:05.259 21:21:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.259 21:21:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.259 21:21:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:05.259 21:21:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:05.259 21:21:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.259 21:21:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.259 21:21:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:05.259 21:21:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:05.259 21:21:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.259 21:21:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.259 21:21:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.259 1+0 records in 00:06:05.259 1+0 records out 00:06:05.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275806 s, 14.9 MB/s 00:06:05.259 21:21:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.259 21:21:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:05.259 21:21:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.259 21:21:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.259 21:21:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:05.259 21:21:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.259 21:21:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.259 21:21:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.259 21:21:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:05.519 { 00:06:05.519 "nbd_device": "/dev/nbd0", 00:06:05.519 "bdev_name": "Malloc0" 00:06:05.519 }, 00:06:05.519 { 00:06:05.519 "nbd_device": "/dev/nbd1", 00:06:05.519 "bdev_name": "Malloc1" 00:06:05.519 } 00:06:05.519 ]' 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.519 { 00:06:05.519 "nbd_device": "/dev/nbd0", 00:06:05.519 "bdev_name": "Malloc0" 00:06:05.519 }, 00:06:05.519 { 00:06:05.519 "nbd_device": "/dev/nbd1", 00:06:05.519 "bdev_name": "Malloc1" 00:06:05.519 } 00:06:05.519 ]' 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.519 /dev/nbd1' 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.519 /dev/nbd1' 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.519 256+0 records in 00:06:05.519 256+0 records out 00:06:05.519 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116635 s, 89.9 MB/s 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.519 256+0 records in 00:06:05.519 256+0 records out 00:06:05.519 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158696 s, 66.1 MB/s 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.519 21:21:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.779 256+0 records in 00:06:05.779 256+0 records out 00:06:05.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167349 s, 62.7 MB/s 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.779 21:21:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.038 21:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.038 21:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.038 21:21:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.038 21:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.038 21:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.038 21:21:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.038 21:21:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.038 21:21:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.038 21:21:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.038 21:21:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.038 21:21:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.297 21:21:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.297 21:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.297 21:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.297 21:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.297 21:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.297 21:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.297 21:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.297 21:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.297 21:21:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.297 21:21:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.297 21:21:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.297 21:21:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.297 21:21:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.297 21:21:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.556 [2024-07-15 21:21:56.205964] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.556 [2024-07-15 21:21:56.268129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.556 [2024-07-15 21:21:56.268141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.556 [2024-07-15 21:21:56.299780] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.556 [2024-07-15 21:21:56.299814] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.849 21:21:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1967088 /var/tmp/spdk-nbd.sock 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1967088 ']' 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
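Once the disks are stopped, the nbd_get_count step seen at the end of every round asks the target for its remaining nbd exports and expects the list to be empty. The same assertion can be made directly against the RPC socket with the jq/grep pipeline from the trace; only the error message wording is invented here.

  SOCK=/var/tmp/spdk-nbd.sock
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  count=$("$RPC" -s "$SOCK" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  if [ "$count" -ne 0 ]; then
      echo "expected no nbd exports to remain, found $count" >&2
      exit 1
  fi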
00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:09.849 21:21:59 event.app_repeat -- event/event.sh@39 -- # killprocess 1967088 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1967088 ']' 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1967088 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1967088 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1967088' 00:06:09.849 killing process with pid 1967088 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1967088 00:06:09.849 21:21:59 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1967088 00:06:09.849 spdk_app_start is called in Round 0. 00:06:09.849 Shutdown signal received, stop current app iteration 00:06:09.849 Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 reinitialization... 00:06:09.849 spdk_app_start is called in Round 1. 00:06:09.850 Shutdown signal received, stop current app iteration 00:06:09.850 Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 reinitialization... 00:06:09.850 spdk_app_start is called in Round 2. 00:06:09.850 Shutdown signal received, stop current app iteration 00:06:09.850 Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 reinitialization... 00:06:09.850 spdk_app_start is called in Round 3. 
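The killprocess helper traced above is deliberately defensive: it first checks the pid still exists, reads the command name back from ps, refuses to signal a bare sudo wrapper, and only then kills and waits. A condensed sketch of that shutdown path is below; the sudo branch of the real helper is simplified away here.

  killprocess() {
      local pid=$1 name
      kill -0 "$pid" || return 0                      # nothing to do if it already exited
      name=$(ps --no-headers -o comm= "$pid")
      # The real helper special-cases processes launched through sudo; this sketch just refuses.
      [ "$name" != "sudo" ] || return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true
  }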
00:06:09.850 Shutdown signal received, stop current app iteration 00:06:09.850 21:21:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:09.850 21:21:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:09.850 00:06:09.850 real 0m15.625s 00:06:09.850 user 0m33.737s 00:06:09.850 sys 0m2.061s 00:06:09.850 21:21:59 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.850 21:21:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.850 ************************************ 00:06:09.850 END TEST app_repeat 00:06:09.850 ************************************ 00:06:09.850 21:21:59 event -- common/autotest_common.sh@1142 -- # return 0 00:06:09.850 21:21:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:09.850 21:21:59 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:09.850 21:21:59 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.850 21:21:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.850 21:21:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.850 ************************************ 00:06:09.850 START TEST cpu_locks 00:06:09.850 ************************************ 00:06:09.850 21:21:59 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:09.850 * Looking for test storage... 00:06:09.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:09.850 21:21:59 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:09.850 21:21:59 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:09.850 21:21:59 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:09.850 21:21:59 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:09.850 21:21:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.850 21:21:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.850 21:21:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.850 ************************************ 00:06:09.850 START TEST default_locks 00:06:09.850 ************************************ 00:06:09.850 21:21:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:09.850 21:21:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1970343 00:06:09.850 21:21:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1970343 00:06:09.850 21:21:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.850 21:21:59 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1970343 ']' 00:06:09.850 21:21:59 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.850 21:21:59 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.850 21:21:59 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
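The default_locks case that starts here boils down to launching a single target on core 0 and confirming it holds the CPU-core lock file, the same check locks_exist performs in cpu_locks.sh. A minimal sketch (binary path relative to the repo root; the sleep is a stand-in for the waitforlisten helper seen in the trace):

    build/bin/spdk_tgt -m 0x1 &                  # single reactor on core 0; takes /var/tmp/spdk_cpu_lock_000
    pid=$!
    sleep 1                                      # stand-in for waitforlisten on /var/tmp/spdk.sock
    lslocks -p "$pid" | grep -q spdk_cpu_lock    # succeeds while the lock file is held
    kill "$pid"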
00:06:09.850 21:21:59 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.850 21:21:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.110 [2024-07-15 21:21:59.671767] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:10.110 [2024-07-15 21:21:59.671834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1970343 ] 00:06:10.110 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.110 [2024-07-15 21:21:59.737454] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.110 [2024-07-15 21:21:59.813150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.051 21:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.051 21:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:11.051 21:22:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1970343 00:06:11.051 21:22:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1970343 00:06:11.051 21:22:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.311 lslocks: write error 00:06:11.311 21:22:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1970343 00:06:11.311 21:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1970343 ']' 00:06:11.311 21:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1970343 00:06:11.311 21:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:11.312 21:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.312 21:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1970343 00:06:11.312 21:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.312 21:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.312 21:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1970343' 00:06:11.312 killing process with pid 1970343 00:06:11.312 21:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1970343 00:06:11.312 21:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1970343 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1970343 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1970343 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1970343 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1970343 ']' 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1970343) - No such process 00:06:11.572 ERROR: process (pid: 1970343) is no longer running 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.572 00:06:11.572 real 0m1.565s 00:06:11.572 user 0m1.706s 00:06:11.572 sys 0m0.523s 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.572 21:22:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.572 ************************************ 00:06:11.572 END TEST default_locks 00:06:11.572 ************************************ 00:06:11.572 21:22:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:11.572 21:22:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:11.572 21:22:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.572 21:22:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.572 21:22:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.572 ************************************ 00:06:11.572 START TEST default_locks_via_rpc 00:06:11.572 ************************************ 00:06:11.572 21:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:11.572 21:22:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1970710 00:06:11.572 21:22:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1970710 00:06:11.572 21:22:01 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 1970710 ']' 00:06:11.572 21:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.572 21:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.572 21:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.572 21:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.572 21:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.572 21:22:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.572 [2024-07-15 21:22:01.300113] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:11.572 [2024-07-15 21:22:01.300165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1970710 ] 00:06:11.572 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.572 [2024-07-15 21:22:01.357994] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.833 [2024-07-15 21:22:01.423043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.403 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.403 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:12.403 21:22:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:12.403 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.403 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.403 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.403 21:22:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:12.403 21:22:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:12.403 21:22:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:12.403 21:22:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:12.403 21:22:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.403 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.404 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.404 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.404 21:22:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1970710 00:06:12.404 21:22:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1970710 00:06:12.404 21:22:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
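default_locks_via_rpc, traced above, exercises the same lock file but toggles it at runtime instead of at startup. A minimal sketch of the sequence (rpc_cmd in the trace is assumed to wrap scripts/rpc.py on the default /var/tmp/spdk.sock socket):

    build/bin/spdk_tgt -m 0x1 &                        # core-0 lock taken at startup
    pid=$!
    sleep 1
    scripts/rpc.py framework_disable_cpumask_locks     # releases /var/tmp/spdk_cpu_lock_000
    scripts/rpc.py framework_enable_cpumask_locks      # re-acquires it
    lslocks -p "$pid" | grep -q spdk_cpu_lock          # the lock must be back before killprocess
    kill "$pid"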
00:06:12.975 21:22:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1970710 00:06:12.975 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1970710 ']' 00:06:12.975 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1970710 00:06:12.975 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:12.975 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.975 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1970710 00:06:12.975 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.975 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.975 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1970710' 00:06:12.975 killing process with pid 1970710 00:06:12.975 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1970710 00:06:12.975 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1970710 00:06:12.975 00:06:12.975 real 0m1.521s 00:06:12.975 user 0m1.597s 00:06:12.975 sys 0m0.510s 00:06:12.975 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.975 21:22:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.975 ************************************ 00:06:12.975 END TEST default_locks_via_rpc 00:06:12.975 ************************************ 00:06:13.236 21:22:02 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:13.236 21:22:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:13.236 21:22:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.236 21:22:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.236 21:22:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.236 ************************************ 00:06:13.236 START TEST non_locking_app_on_locked_coremask 00:06:13.236 ************************************ 00:06:13.236 21:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:13.236 21:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1971079 00:06:13.236 21:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1971079 /var/tmp/spdk.sock 00:06:13.236 21:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.236 21:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1971079 ']' 00:06:13.236 21:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.236 21:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.236 21:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.236 21:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.236 21:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.236 [2024-07-15 21:22:02.896350] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:13.236 [2024-07-15 21:22:02.896401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971079 ] 00:06:13.236 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.236 [2024-07-15 21:22:02.955134] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.236 [2024-07-15 21:22:03.018881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.177 21:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.177 21:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:14.177 21:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:14.177 21:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1971297 00:06:14.177 21:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1971297 /var/tmp/spdk2.sock 00:06:14.177 21:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1971297 ']' 00:06:14.177 21:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.177 21:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.177 21:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.177 21:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.177 21:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.177 [2024-07-15 21:22:03.709993] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:14.177 [2024-07-15 21:22:03.710048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971297 ] 00:06:14.177 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.177 [2024-07-15 21:22:03.799459] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
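The non_locking_app_on_locked_coremask case whose startup is traced above pairs a locked target with an unlocked one on the same core. A minimal sketch (flags and the second RPC socket path are taken from the trace):

    build/bin/spdk_tgt -m 0x1 &                                              # first app claims core 0
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # the second app prints 'CPU core locks deactivated.' and still comes up,
    # because it never tries to take /var/tmp/spdk_cpu_lock_000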
00:06:14.177 [2024-07-15 21:22:03.799488] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.177 [2024-07-15 21:22:03.932796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.746 21:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.746 21:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:14.746 21:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1971079 00:06:14.746 21:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1971079 00:06:14.746 21:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.316 lslocks: write error 00:06:15.316 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1971079 00:06:15.316 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1971079 ']' 00:06:15.316 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1971079 00:06:15.316 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:15.316 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.316 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1971079 00:06:15.316 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.316 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.316 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1971079' 00:06:15.316 killing process with pid 1971079 00:06:15.316 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1971079 00:06:15.316 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1971079 00:06:15.886 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1971297 00:06:15.886 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1971297 ']' 00:06:15.886 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1971297 00:06:15.886 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:15.886 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.886 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1971297 00:06:15.886 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.886 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.886 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1971297' 00:06:15.886 
killing process with pid 1971297 00:06:15.886 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1971297 00:06:15.886 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1971297 00:06:16.146 00:06:16.146 real 0m2.936s 00:06:16.146 user 0m3.219s 00:06:16.146 sys 0m0.853s 00:06:16.146 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.146 21:22:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.146 ************************************ 00:06:16.146 END TEST non_locking_app_on_locked_coremask 00:06:16.146 ************************************ 00:06:16.146 21:22:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:16.146 21:22:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:16.147 21:22:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.147 21:22:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.147 21:22:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.147 ************************************ 00:06:16.147 START TEST locking_app_on_unlocked_coremask 00:06:16.147 ************************************ 00:06:16.147 21:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:16.147 21:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1971782 00:06:16.147 21:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1971782 /var/tmp/spdk.sock 00:06:16.147 21:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:16.147 21:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1971782 ']' 00:06:16.147 21:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.147 21:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.147 21:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.147 21:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.147 21:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.147 [2024-07-15 21:22:05.908456] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
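locking_app_on_unlocked_coremask, whose startup begins here, inverts the previous case: the first target runs with --disable-cpumask-locks, so a second, normally locked target can still claim core 0. A minimal sketch of what the following trace performs:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &        # leaves core 0 unlocked
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # this one takes /var/tmp/spdk_cpu_lock_000
    pid2=$!
    sleep 1
    lslocks -p "$pid2" | grep -q spdk_cpu_lock                 # the lock belongs to the second app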
00:06:16.147 [2024-07-15 21:22:05.908508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971782 ] 00:06:16.147 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.406 [2024-07-15 21:22:05.968845] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:16.406 [2024-07-15 21:22:05.968873] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.406 [2024-07-15 21:22:06.034755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.975 21:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.975 21:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:16.975 21:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.975 21:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1971816 00:06:16.975 21:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1971816 /var/tmp/spdk2.sock 00:06:16.975 21:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1971816 ']' 00:06:16.975 21:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.976 21:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.976 21:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.976 21:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.976 21:22:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.976 [2024-07-15 21:22:06.734356] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:06:16.976 [2024-07-15 21:22:06.734414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971816 ] 00:06:16.976 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.236 [2024-07-15 21:22:06.821426] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.236 [2024-07-15 21:22:06.955838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.808 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.808 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:17.808 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1971816 00:06:17.808 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1971816 00:06:17.808 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.379 lslocks: write error 00:06:18.379 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1971782 00:06:18.379 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1971782 ']' 00:06:18.379 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1971782 00:06:18.379 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:18.379 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.379 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1971782 00:06:18.379 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.379 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.379 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1971782' 00:06:18.379 killing process with pid 1971782 00:06:18.379 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1971782 00:06:18.379 21:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1971782 00:06:18.640 21:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1971816 00:06:18.640 21:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1971816 ']' 00:06:18.640 21:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1971816 00:06:18.640 21:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:18.640 21:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.640 21:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1971816 00:06:18.901 21:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:18.901 21:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.901 21:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1971816' 00:06:18.901 killing process with pid 1971816 00:06:18.901 21:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1971816 00:06:18.901 21:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1971816 00:06:18.901 00:06:18.901 real 0m2.833s 00:06:18.901 user 0m3.107s 00:06:18.901 sys 0m0.832s 00:06:18.901 21:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.901 21:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.901 ************************************ 00:06:18.901 END TEST locking_app_on_unlocked_coremask 00:06:18.901 ************************************ 00:06:19.162 21:22:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:19.162 21:22:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:19.162 21:22:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.162 21:22:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.162 21:22:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.162 ************************************ 00:06:19.162 START TEST locking_app_on_locked_coremask 00:06:19.162 ************************************ 00:06:19.162 21:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:19.162 21:22:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1972390 00:06:19.162 21:22:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1972390 /var/tmp/spdk.sock 00:06:19.162 21:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1972390 ']' 00:06:19.162 21:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.162 21:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.162 21:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.162 21:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.162 21:22:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.162 21:22:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.162 [2024-07-15 21:22:08.818099] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:06:19.163 [2024-07-15 21:22:08.818154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972390 ] 00:06:19.163 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.163 [2024-07-15 21:22:08.877603] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.163 [2024-07-15 21:22:08.944911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1972501 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1972501 /var/tmp/spdk2.sock 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1972501 /var/tmp/spdk2.sock 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1972501 /var/tmp/spdk2.sock 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1972501 ']' 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.106 21:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.106 [2024-07-15 21:22:09.597208] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:06:20.107 [2024-07-15 21:22:09.597258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972501 ] 00:06:20.107 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.107 [2024-07-15 21:22:09.684946] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1972390 has claimed it. 00:06:20.107 [2024-07-15 21:22:09.684982] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1972501) - No such process 00:06:20.677 ERROR: process (pid: 1972501) is no longer running 00:06:20.677 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.677 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:20.677 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:20.677 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.677 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:20.677 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.678 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1972390 00:06:20.678 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1972390 00:06:20.678 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.938 lslocks: write error 00:06:20.938 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1972390 00:06:20.938 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1972390 ']' 00:06:20.938 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1972390 00:06:20.938 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:20.938 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.938 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1972390 00:06:21.199 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.199 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.199 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1972390' 00:06:21.199 killing process with pid 1972390 00:06:21.199 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1972390 00:06:21.199 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1972390 00:06:21.199 00:06:21.199 real 0m2.225s 00:06:21.199 user 0m2.450s 00:06:21.199 sys 0m0.615s 00:06:21.199 21:22:10 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.199 21:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.199 ************************************ 00:06:21.199 END TEST locking_app_on_locked_coremask 00:06:21.199 ************************************ 00:06:21.461 21:22:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:21.461 21:22:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:21.461 21:22:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.461 21:22:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.461 21:22:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.461 ************************************ 00:06:21.461 START TEST locking_overlapped_coremask 00:06:21.461 ************************************ 00:06:21.461 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:21.461 21:22:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1972868 00:06:21.461 21:22:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1972868 /var/tmp/spdk.sock 00:06:21.461 21:22:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:21.461 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1972868 ']' 00:06:21.461 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.461 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.461 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.461 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.461 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.461 [2024-07-15 21:22:11.109049] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
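The locking_app_on_locked_coremask run that finished just above is the negative counterpart: with the first target holding the core-0 lock, a second plain target on the same mask must refuse to start, which is the claim_cpu_cores error seen in its trace. A minimal sketch:

    build/bin/spdk_tgt -m 0x1 &                               # holds /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock || true  # expected to exit with:
    #   'Cannot create lock on core 0, probably process <pid> has claimed it.'
    #   'Unable to acquire lock on assigned core mask - exiting.'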
00:06:21.461 [2024-07-15 21:22:11.109098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972868 ] 00:06:21.461 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.461 [2024-07-15 21:22:11.168284] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.461 [2024-07-15 21:22:11.233253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.461 [2024-07-15 21:22:11.233367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.461 [2024-07-15 21:22:11.233369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1972979 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1972979 /var/tmp/spdk2.sock 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1972979 /var/tmp/spdk2.sock 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1972979 /var/tmp/spdk2.sock 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1972979 ']' 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.407 21:22:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.407 [2024-07-15 21:22:11.932186] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:06:22.407 [2024-07-15 21:22:11.932242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972979 ] 00:06:22.407 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.407 [2024-07-15 21:22:12.003156] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1972868 has claimed it. 00:06:22.407 [2024-07-15 21:22:12.003196] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:22.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1972979) - No such process 00:06:22.981 ERROR: process (pid: 1972979) is no longer running 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1972868 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1972868 ']' 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1972868 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1972868 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1972868' 00:06:22.981 killing process with pid 1972868 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1972868 00:06:22.981 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1972868 00:06:23.242 00:06:23.242 real 0m1.758s 00:06:23.242 user 0m4.996s 00:06:23.242 sys 0m0.362s 00:06:23.242 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.242 21:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.242 ************************************ 00:06:23.242 END TEST locking_overlapped_coremask 00:06:23.242 ************************************ 00:06:23.242 21:22:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:23.242 21:22:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:23.242 21:22:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.242 21:22:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.242 21:22:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.242 ************************************ 00:06:23.242 START TEST locking_overlapped_coremask_via_rpc 00:06:23.242 ************************************ 00:06:23.242 21:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:23.242 21:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1973239 00:06:23.242 21:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1973239 /var/tmp/spdk.sock 00:06:23.242 21:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:23.242 21:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1973239 ']' 00:06:23.242 21:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.242 21:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.242 21:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.242 21:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.242 21:22:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.242 [2024-07-15 21:22:12.941083] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:23.242 [2024-07-15 21:22:12.941139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973239 ] 00:06:23.242 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.242 [2024-07-15 21:22:13.000524] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
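locking_overlapped_coremask, which ended just above, repeats that check with partially overlapping masks: 0x7 (cores 0-2) against 0x1c (cores 2-4), colliding on core 2. A minimal sketch (masks, error text, and lock-file names are from the trace; the glob listing stands in for check_remaining_locks):

    build/bin/spdk_tgt -m 0x7 &                                 # locks cores 0, 1 and 2
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock || true   # fails: core 2 is already claimed
    ls /var/tmp/spdk_cpu_lock_*                                 # expect exactly _000, _001 and _002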
00:06:23.242 [2024-07-15 21:22:13.000552] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.503 [2024-07-15 21:22:13.065413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.503 [2024-07-15 21:22:13.065529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.503 [2024-07-15 21:22:13.065531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.148 21:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.148 21:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:24.148 21:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1973426 00:06:24.148 21:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1973426 /var/tmp/spdk2.sock 00:06:24.148 21:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1973426 ']' 00:06:24.148 21:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:24.149 21:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.149 21:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.149 21:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.149 21:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.149 21:22:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.149 [2024-07-15 21:22:13.772450] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:24.149 [2024-07-15 21:22:13.772506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973426 ] 00:06:24.149 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.149 [2024-07-15 21:22:13.842649] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:24.149 [2024-07-15 21:22:13.842673] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.149 [2024-07-15 21:22:13.952484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.149 [2024-07-15 21:22:13.952639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.149 [2024-07-15 21:22:13.952641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.719 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.719 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:24.719 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.719 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.719 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.719 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.719 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.979 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:24.979 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.979 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:24.979 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.979 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:24.979 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.979 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.979 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.979 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.979 [2024-07-15 21:22:14.534187] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1973239 has claimed it. 
00:06:24.979 request: 00:06:24.979 { 00:06:24.979 "method": "framework_enable_cpumask_locks", 00:06:24.979 "req_id": 1 00:06:24.979 } 00:06:24.979 Got JSON-RPC error response 00:06:24.980 response: 00:06:24.980 { 00:06:24.980 "code": -32603, 00:06:24.980 "message": "Failed to claim CPU core: 2" 00:06:24.980 } 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1973239 /var/tmp/spdk.sock 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1973239 ']' 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1973426 /var/tmp/spdk2.sock 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1973426 ']' 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.980 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.240 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.240 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:25.240 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:25.240 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.240 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.240 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.240 00:06:25.240 real 0m1.981s 00:06:25.240 user 0m0.768s 00:06:25.240 sys 0m0.147s 00:06:25.240 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.240 21:22:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.240 ************************************ 00:06:25.240 END TEST locking_overlapped_coremask_via_rpc 00:06:25.240 ************************************ 00:06:25.240 21:22:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:25.240 21:22:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:25.240 21:22:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1973239 ]] 00:06:25.240 21:22:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1973239 00:06:25.240 21:22:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1973239 ']' 00:06:25.240 21:22:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1973239 00:06:25.240 21:22:14 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:25.240 21:22:14 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.240 21:22:14 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1973239 00:06:25.240 21:22:14 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.240 21:22:14 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.240 21:22:14 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1973239' 00:06:25.240 killing process with pid 1973239 00:06:25.240 21:22:14 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1973239 00:06:25.240 21:22:14 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1973239 00:06:25.500 21:22:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1973426 ]] 00:06:25.500 21:22:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1973426 00:06:25.500 21:22:15 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1973426 ']' 00:06:25.500 21:22:15 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1973426 00:06:25.500 21:22:15 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:25.500 21:22:15 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.500 21:22:15 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1973426 00:06:25.500 21:22:15 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:25.500 21:22:15 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:25.500 21:22:15 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1973426' 00:06:25.500 killing process with pid 1973426 00:06:25.500 21:22:15 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1973426 00:06:25.500 21:22:15 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1973426 00:06:25.760 21:22:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.760 21:22:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:25.760 21:22:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1973239 ]] 00:06:25.760 21:22:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1973239 00:06:25.760 21:22:15 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1973239 ']' 00:06:25.760 21:22:15 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1973239 00:06:25.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1973239) - No such process 00:06:25.760 21:22:15 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1973239 is not found' 00:06:25.760 Process with pid 1973239 is not found 00:06:25.760 21:22:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1973426 ]] 00:06:25.760 21:22:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1973426 00:06:25.760 21:22:15 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1973426 ']' 00:06:25.760 21:22:15 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1973426 00:06:25.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1973426) - No such process 00:06:25.760 21:22:15 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1973426 is not found' 00:06:25.760 Process with pid 1973426 is not found 00:06:25.760 21:22:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:25.760 00:06:25.760 real 0m15.953s 00:06:25.760 user 0m27.341s 00:06:25.760 sys 0m4.675s 00:06:25.760 21:22:15 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.760 21:22:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.760 ************************************ 00:06:25.760 END TEST cpu_locks 00:06:25.760 ************************************ 00:06:25.760 21:22:15 event -- common/autotest_common.sh@1142 -- # return 0 00:06:25.760 00:06:25.760 real 0m41.483s 00:06:25.760 user 1m20.420s 00:06:25.760 sys 0m7.713s 00:06:25.760 21:22:15 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.760 21:22:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.760 ************************************ 00:06:25.760 END TEST event 00:06:25.760 ************************************ 00:06:25.760 21:22:15 -- common/autotest_common.sh@1142 -- # return 0 00:06:25.760 21:22:15 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:25.760 21:22:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.760 21:22:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.760 
21:22:15 -- common/autotest_common.sh@10 -- # set +x 00:06:25.760 ************************************ 00:06:25.760 START TEST thread 00:06:25.760 ************************************ 00:06:25.760 21:22:15 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:26.020 * Looking for test storage... 00:06:26.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:26.020 21:22:15 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.020 21:22:15 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:26.020 21:22:15 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.020 21:22:15 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.020 ************************************ 00:06:26.020 START TEST thread_poller_perf 00:06:26.020 ************************************ 00:06:26.020 21:22:15 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.020 [2024-07-15 21:22:15.707273] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:26.020 [2024-07-15 21:22:15.707383] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974002 ] 00:06:26.020 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.020 [2024-07-15 21:22:15.773597] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.280 [2024-07-15 21:22:15.847603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.280 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:27.221 ====================================== 00:06:27.221 busy:2407326878 (cyc) 00:06:27.221 total_run_count: 287000 00:06:27.221 tsc_hz: 2400000000 (cyc) 00:06:27.221 ====================================== 00:06:27.221 poller_cost: 8387 (cyc), 3494 (nsec) 00:06:27.221 00:06:27.221 real 0m1.223s 00:06:27.221 user 0m1.143s 00:06:27.221 sys 0m0.077s 00:06:27.221 21:22:16 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.221 21:22:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.221 ************************************ 00:06:27.221 END TEST thread_poller_perf 00:06:27.221 ************************************ 00:06:27.221 21:22:16 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:27.221 21:22:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:27.221 21:22:16 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:27.221 21:22:16 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.221 21:22:16 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.221 ************************************ 00:06:27.221 START TEST thread_poller_perf 00:06:27.221 ************************************ 00:06:27.221 21:22:16 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:27.221 [2024-07-15 21:22:17.006853] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:27.221 [2024-07-15 21:22:17.006943] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974169 ] 00:06:27.481 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.481 [2024-07-15 21:22:17.072617] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.481 [2024-07-15 21:22:17.142950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.481 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:28.450 ====================================== 00:06:28.450 busy:2401999978 (cyc) 00:06:28.450 total_run_count: 3802000 00:06:28.450 tsc_hz: 2400000000 (cyc) 00:06:28.450 ====================================== 00:06:28.450 poller_cost: 631 (cyc), 262 (nsec) 00:06:28.450 00:06:28.450 real 0m1.212s 00:06:28.450 user 0m1.143s 00:06:28.450 sys 0m0.065s 00:06:28.450 21:22:18 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.450 21:22:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:28.450 ************************************ 00:06:28.450 END TEST thread_poller_perf 00:06:28.450 ************************************ 00:06:28.450 21:22:18 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:28.450 21:22:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:28.450 00:06:28.450 real 0m2.692s 00:06:28.450 user 0m2.375s 00:06:28.450 sys 0m0.325s 00:06:28.450 21:22:18 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.450 21:22:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.450 ************************************ 00:06:28.450 END TEST thread 00:06:28.450 ************************************ 00:06:28.711 21:22:18 -- common/autotest_common.sh@1142 -- # return 0 00:06:28.712 21:22:18 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:28.712 21:22:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.712 21:22:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.712 21:22:18 -- common/autotest_common.sh@10 -- # set +x 00:06:28.712 ************************************ 00:06:28.712 START TEST accel 00:06:28.712 ************************************ 00:06:28.712 21:22:18 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:28.712 * Looking for test storage... 00:06:28.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:28.712 21:22:18 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:28.712 21:22:18 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:28.712 21:22:18 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:28.712 21:22:18 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1974445 00:06:28.712 21:22:18 accel -- accel/accel.sh@63 -- # waitforlisten 1974445 00:06:28.712 21:22:18 accel -- common/autotest_common.sh@829 -- # '[' -z 1974445 ']' 00:06:28.712 21:22:18 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.712 21:22:18 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.712 21:22:18 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:28.712 21:22:18 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:28.712 21:22:18 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.712 21:22:18 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:28.712 21:22:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.712 21:22:18 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.712 21:22:18 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.712 21:22:18 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.712 21:22:18 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.712 21:22:18 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.712 21:22:18 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:28.712 21:22:18 accel -- accel/accel.sh@41 -- # jq -r . 00:06:28.712 [2024-07-15 21:22:18.467588] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:28.712 [2024-07-15 21:22:18.467646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974445 ] 00:06:28.712 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.972 [2024-07-15 21:22:18.530436] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.972 [2024-07-15 21:22:18.599825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.543 21:22:19 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.543 21:22:19 accel -- common/autotest_common.sh@862 -- # return 0 00:06:29.543 21:22:19 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:29.543 21:22:19 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:29.543 21:22:19 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:29.543 21:22:19 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:29.543 21:22:19 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:29.543 21:22:19 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:29.543 21:22:19 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:29.543 21:22:19 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.543 21:22:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.543 21:22:19 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.543 21:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.543 21:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.543 21:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.543 21:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.543 21:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.543 21:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.543 21:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.543 21:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.543 21:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.543 21:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.543 21:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.543 21:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.543 21:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.543 21:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.543 21:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.543 21:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.543 21:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.543 21:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.543 21:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.543 21:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.543 21:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.543 
21:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.543 21:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.543 21:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.543 21:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.543 21:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.543 21:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.543 21:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.543 21:22:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.543 21:22:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.544 21:22:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.544 21:22:19 accel -- accel/accel.sh@75 -- # killprocess 1974445 00:06:29.544 21:22:19 accel -- common/autotest_common.sh@948 -- # '[' -z 1974445 ']' 00:06:29.544 21:22:19 accel -- common/autotest_common.sh@952 -- # kill -0 1974445 00:06:29.544 21:22:19 accel -- common/autotest_common.sh@953 -- # uname 00:06:29.544 21:22:19 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.544 21:22:19 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1974445 00:06:29.544 21:22:19 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.544 21:22:19 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.544 21:22:19 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1974445' 00:06:29.544 killing process with pid 1974445 00:06:29.544 21:22:19 accel -- common/autotest_common.sh@967 -- # kill 1974445 00:06:29.544 21:22:19 accel -- common/autotest_common.sh@972 -- # wait 1974445 00:06:29.804 21:22:19 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:29.804 21:22:19 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:29.804 21:22:19 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:29.804 21:22:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.804 21:22:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.804 21:22:19 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:29.804 21:22:19 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:29.804 21:22:19 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:29.804 21:22:19 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.804 21:22:19 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.804 21:22:19 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.804 21:22:19 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.804 21:22:19 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.804 21:22:19 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:29.804 21:22:19 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:30.066 21:22:19 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.066 21:22:19 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:30.066 21:22:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.066 21:22:19 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:30.066 21:22:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:30.066 21:22:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.066 21:22:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.066 ************************************ 00:06:30.066 START TEST accel_missing_filename 00:06:30.066 ************************************ 00:06:30.066 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:30.066 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:30.066 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:30.066 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.066 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.066 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.066 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.066 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:30.066 21:22:19 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:30.066 21:22:19 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:30.066 21:22:19 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.066 21:22:19 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.066 21:22:19 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.066 21:22:19 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.066 21:22:19 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.066 21:22:19 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:30.066 21:22:19 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:30.066 [2024-07-15 21:22:19.711601] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:30.066 [2024-07-15 21:22:19.711668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974807 ] 00:06:30.066 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.066 [2024-07-15 21:22:19.772757] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.066 [2024-07-15 21:22:19.836141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.066 [2024-07-15 21:22:19.868011] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.328 [2024-07-15 21:22:19.904913] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:30.328 A filename is required. 
00:06:30.328 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:30.328 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.328 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:30.328 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:30.328 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:30.328 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.328 00:06:30.328 real 0m0.277s 00:06:30.328 user 0m0.214s 00:06:30.328 sys 0m0.102s 00:06:30.328 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.328 21:22:19 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:30.328 ************************************ 00:06:30.328 END TEST accel_missing_filename 00:06:30.328 ************************************ 00:06:30.328 21:22:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.328 21:22:19 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.328 21:22:19 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:30.328 21:22:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.328 21:22:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.328 ************************************ 00:06:30.328 START TEST accel_compress_verify 00:06:30.328 ************************************ 00:06:30.328 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.328 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:30.328 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.328 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.328 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.328 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.328 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.328 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.328 21:22:20 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:30.328 21:22:20 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.328 21:22:20 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.328 21:22:20 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.328 21:22:20 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.328 21:22:20 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.328 21:22:20 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.328 21:22:20 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:30.328 21:22:20 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:30.328 [2024-07-15 21:22:20.058726] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:30.328 [2024-07-15 21:22:20.058818] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974841 ] 00:06:30.328 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.328 [2024-07-15 21:22:20.121213] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.589 [2024-07-15 21:22:20.186441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.589 [2024-07-15 21:22:20.218277] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.589 [2024-07-15 21:22:20.255186] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:30.589 00:06:30.589 Compression does not support the verify option, aborting. 00:06:30.589 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:30.589 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.589 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:30.589 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:30.589 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:30.589 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.589 00:06:30.589 real 0m0.280s 00:06:30.589 user 0m0.199s 00:06:30.589 sys 0m0.104s 00:06:30.589 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.589 21:22:20 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:30.589 ************************************ 00:06:30.589 END TEST accel_compress_verify 00:06:30.589 ************************************ 00:06:30.589 21:22:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.589 21:22:20 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:30.589 21:22:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:30.589 21:22:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.589 21:22:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.589 ************************************ 00:06:30.589 START TEST accel_wrong_workload 00:06:30.589 ************************************ 00:06:30.589 21:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:30.589 21:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:30.589 21:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:30.589 21:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.589 21:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.589 21:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.589 21:22:20 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.589 21:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:30.589 21:22:20 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:30.589 21:22:20 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:30.589 21:22:20 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.589 21:22:20 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.589 21:22:20 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.589 21:22:20 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.589 21:22:20 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.589 21:22:20 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:30.589 21:22:20 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:30.850 Unsupported workload type: foobar 00:06:30.850 [2024-07-15 21:22:20.403945] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:30.850 accel_perf options: 00:06:30.850 [-h help message] 00:06:30.850 [-q queue depth per core] 00:06:30.850 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:30.850 [-T number of threads per core 00:06:30.850 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:30.850 [-t time in seconds] 00:06:30.850 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:30.850 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:30.850 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:30.850 [-l for compress/decompress workloads, name of uncompressed input file 00:06:30.850 [-S for crc32c workload, use this seed value (default 0) 00:06:30.850 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:30.850 [-f for fill workload, use this BYTE value (default 255) 00:06:30.850 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:30.850 [-y verify result if this switch is on] 00:06:30.850 [-a tasks to allocate per core (default: same value as -q)] 00:06:30.850 Can be used to spread operations across a wider range of memory. 
00:06:30.850 21:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:30.850 21:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.850 21:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.850 21:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.850 00:06:30.850 real 0m0.035s 00:06:30.850 user 0m0.018s 00:06:30.850 sys 0m0.017s 00:06:30.850 21:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.850 21:22:20 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:30.850 ************************************ 00:06:30.850 END TEST accel_wrong_workload 00:06:30.850 ************************************ 00:06:30.850 Error: writing output failed: Broken pipe 00:06:30.850 21:22:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.850 21:22:20 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:30.850 21:22:20 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:30.850 21:22:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.850 21:22:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.850 ************************************ 00:06:30.850 START TEST accel_negative_buffers 00:06:30.850 ************************************ 00:06:30.850 21:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:30.850 21:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:30.850 21:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:30.850 21:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.850 21:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.850 21:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.850 21:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.850 21:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:30.850 21:22:20 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:30.850 21:22:20 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:30.850 21:22:20 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.850 21:22:20 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.851 21:22:20 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.851 21:22:20 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.851 21:22:20 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.851 21:22:20 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:30.851 21:22:20 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:30.851 -x option must be non-negative. 
00:06:30.851 [2024-07-15 21:22:20.511179] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:30.851 accel_perf options: 00:06:30.851 [-h help message] 00:06:30.851 [-q queue depth per core] 00:06:30.851 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:30.851 [-T number of threads per core 00:06:30.851 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:30.851 [-t time in seconds] 00:06:30.851 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:30.851 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:30.851 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:30.851 [-l for compress/decompress workloads, name of uncompressed input file 00:06:30.851 [-S for crc32c workload, use this seed value (default 0) 00:06:30.851 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:30.851 [-f for fill workload, use this BYTE value (default 255) 00:06:30.851 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:30.851 [-y verify result if this switch is on] 00:06:30.851 [-a tasks to allocate per core (default: same value as -q)] 00:06:30.851 Can be used to spread operations across a wider range of memory. 00:06:30.851 21:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:30.851 21:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.851 21:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.851 21:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.851 00:06:30.851 real 0m0.036s 00:06:30.851 user 0m0.025s 00:06:30.851 sys 0m0.011s 00:06:30.851 21:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.851 21:22:20 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:30.851 ************************************ 00:06:30.851 END TEST accel_negative_buffers 00:06:30.851 ************************************ 00:06:30.851 Error: writing output failed: Broken pipe 00:06:30.851 21:22:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.851 21:22:20 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:30.851 21:22:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:30.851 21:22:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.851 21:22:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.851 ************************************ 00:06:30.851 START TEST accel_crc32c 00:06:30.851 ************************************ 00:06:30.851 21:22:20 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:30.851 21:22:20 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:30.851 21:22:20 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:30.851 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.851 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.851 21:22:20 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:30.851 21:22:20 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:30.851 21:22:20 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:30.851 21:22:20 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.851 21:22:20 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.851 21:22:20 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.851 21:22:20 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.851 21:22:20 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.851 21:22:20 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:30.851 21:22:20 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:30.851 [2024-07-15 21:22:20.623213] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:30.851 [2024-07-15 21:22:20.623308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975114 ] 00:06:30.851 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.113 [2024-07-15 21:22:20.685514] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.113 [2024-07-15 21:22:20.751248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.113 21:22:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:32.498 21:22:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.498 00:06:32.498 real 0m1.286s 00:06:32.498 user 0m1.198s 00:06:32.498 sys 0m0.100s 00:06:32.499 21:22:21 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.499 21:22:21 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:32.499 ************************************ 00:06:32.499 END TEST accel_crc32c 00:06:32.499 ************************************ 00:06:32.499 21:22:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.499 21:22:21 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:32.499 21:22:21 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:32.499 21:22:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.499 21:22:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.499 ************************************ 00:06:32.499 START TEST accel_crc32c_C2 00:06:32.499 ************************************ 00:06:32.499 21:22:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:32.499 21:22:21 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.499 21:22:21 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:32.499 21:22:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:21 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:32.499 21:22:21 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:32.499 21:22:21 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.499 21:22:21 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.499 21:22:21 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.499 21:22:21 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.499 21:22:21 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.499 21:22:21 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.499 21:22:21 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:32.499 21:22:21 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:32.499 [2024-07-15 21:22:21.977671] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:32.499 [2024-07-15 21:22:21.977771] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975287 ] 00:06:32.499 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.499 [2024-07-15 21:22:22.040167] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.499 [2024-07-15 21:22:22.108641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:32.499 21:22:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.442 00:06:33.442 real 0m1.289s 00:06:33.442 user 0m1.201s 00:06:33.442 sys 0m0.100s 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.442 21:22:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:33.442 ************************************ 00:06:33.442 END TEST accel_crc32c_C2 00:06:33.442 ************************************ 00:06:33.702 21:22:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.702 21:22:23 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:33.702 21:22:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:33.702 21:22:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.702 21:22:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.702 ************************************ 00:06:33.702 START TEST accel_copy 00:06:33.702 ************************************ 00:06:33.702 21:22:23 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
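The two crc32c runs that just finished (accel_crc32c and its -C 2 variant) drive the accel framework's software module, computing CRC-32C over 4096-byte buffers for one second per run, as the val= lines echoed above show. As an illustration of what that workload computes, and not SPDK's actual implementation, a bitwise CRC-32C (Castagnoli polynomial 0x82F63B78) over one such buffer can be sketched in plain C:

/* Illustrative bitwise CRC-32C (Castagnoli) over a 4096-byte buffer.
 * This only mirrors what the "-w crc32c" software path computes; it is
 * not SPDK's optimized implementation. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c_sw(uint32_t crc, const uint8_t *buf, size_t len)
{
    crc = ~crc;
    while (len--) {
        crc ^= *buf++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
    }
    return ~crc;
}

int main(void)
{
    uint8_t buf[4096];                /* matches the '4096 bytes' echoed above */
    memset(buf, 0xA5, sizeof(buf));   /* arbitrary test pattern */
    printf("crc32c = 0x%08x\n", crc32c_sw(0, buf, sizeof(buf)));
    return 0;
}

Production paths typically use the SSE4.2 crc32 instruction or table-driven variants rather than this bit-at-a-time loop; the resulting checksum is the same.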
00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:33.702 [2024-07-15 21:22:23.335237] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:33.702 [2024-07-15 21:22:23.335344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975607 ] 00:06:33.702 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.702 [2024-07-15 21:22:23.403949] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.702 [2024-07-15 21:22:23.467736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.702 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.961 21:22:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.902 
21:22:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:34.902 21:22:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.902 00:06:34.902 real 0m1.292s 00:06:34.902 user 0m1.198s 00:06:34.902 sys 0m0.104s 00:06:34.902 21:22:24 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.902 21:22:24 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:34.902 ************************************ 00:06:34.902 END TEST accel_copy 00:06:34.902 ************************************ 00:06:34.902 21:22:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.902 21:22:24 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.902 21:22:24 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:34.902 21:22:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.902 21:22:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.902 ************************************ 00:06:34.902 START TEST accel_fill 00:06:34.902 ************************************ 00:06:34.902 21:22:24 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.902 21:22:24 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:34.902 21:22:24 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:34.902 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.902 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.902 21:22:24 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.902 21:22:24 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:34.902 21:22:24 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:34.902 21:22:24 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.902 21:22:24 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.902 21:22:24 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.902 21:22:24 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.902 21:22:24 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.902 21:22:24 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:34.902 21:22:24 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:34.902 [2024-07-15 21:22:24.693358] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:34.902 [2024-07-15 21:22:24.693450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975955 ] 00:06:35.163 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.163 [2024-07-15 21:22:24.753906] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.163 [2024-07-15 21:22:24.817460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
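The fill run being configured here (-w fill -f 128 -q 64 -a 64) writes a single repeated byte across each 4096-byte destination; the fill value 128 shows up above as val=0x80. As a rough stand-in, and assuming the -y flag asks accel_perf to verify the result, the software path amounts to a memset plus a check:

/* Illustrative "fill" workload: one repeated byte across the destination.
 * Values mirror the trace above (0x80 fill byte, 4096-byte buffer); this
 * is a plain C stand-in, not SPDK's accel code. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint8_t dst[4096];
    uint8_t fill = 0x80;              /* -f 128 from the command line above */

    memset(dst, fill, sizeof(dst));   /* software fill is effectively memset */

    for (size_t i = 0; i < sizeof(dst); i++)
        assert(dst[i] == fill);       /* hypothetical verify pass (-y assumed) */
    return 0;
}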
00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.163 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.164 21:22:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:25 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:36.550 21:22:25 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.550 00:06:36.550 real 0m1.282s 00:06:36.550 user 0m1.193s 00:06:36.550 sys 0m0.100s 00:06:36.550 21:22:25 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.550 21:22:25 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:36.550 ************************************ 00:06:36.550 END TEST accel_fill 00:06:36.550 ************************************ 00:06:36.550 21:22:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.550 21:22:25 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:36.550 21:22:25 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:36.550 21:22:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.550 21:22:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.550 ************************************ 00:06:36.550 START TEST accel_copy_crc32c 00:06:36.550 ************************************ 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:36.550 [2024-07-15 21:22:26.048132] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:36.550 [2024-07-15 21:22:26.048196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976304 ] 00:06:36.550 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.550 [2024-07-15 21:22:26.108113] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.550 [2024-07-15 21:22:26.172556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:36.550 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.551 
21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.551 21:22:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.492 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.492 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.492 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.753 00:06:37.753 real 0m1.281s 00:06:37.753 user 0m1.190s 00:06:37.753 sys 0m0.104s 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.753 21:22:27 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:37.753 ************************************ 00:06:37.753 END TEST accel_copy_crc32c 00:06:37.753 ************************************ 00:06:37.753 21:22:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.753 21:22:27 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:37.753 21:22:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:37.753 21:22:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.753 21:22:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.753 ************************************ 00:06:37.753 START TEST accel_copy_crc32c_C2 00:06:37.753 ************************************ 00:06:37.753 21:22:27 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:37.753 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.753 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:37.753 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.753 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.753 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:37.753 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:37.753 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.753 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.753 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.753 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.754 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.754 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.754 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.754 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:37.754 [2024-07-15 21:22:27.402904] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:37.754 [2024-07-15 21:22:27.402965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976565 ] 00:06:37.754 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.754 [2024-07-15 21:22:27.465195] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.754 [2024-07-15 21:22:27.535933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
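The copy_crc32c runs traced here (the plain one that just finished and the -C 2 variant now being configured, whose trace below echoes both '4096 bytes' and '8192 bytes' buffers) fuse a buffer copy with a CRC-32C of the copied data. A minimal plain C stand-in for the single-buffer case, not SPDK's implementation, is:

/* Illustrative "copy_crc32c": copy src -> dst and return the CRC-32C of
 * the copied bytes as one operation. Plain C stand-in, not SPDK code. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c_sw(uint32_t crc, const uint8_t *buf, size_t len)
{
    crc = ~crc;
    while (len--) {
        crc ^= *buf++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
    }
    return ~crc;
}

static uint32_t copy_crc32c(uint8_t *dst, const uint8_t *src, size_t len)
{
    memcpy(dst, src, len);            /* the "copy" half */
    return crc32c_sw(0, dst, len);    /* the "crc32c" half, over the copy */
}

int main(void)
{
    static uint8_t src[4096], dst[4096];
    memset(src, 0x5A, sizeof(src));
    printf("crc32c = 0x%08x\n", copy_crc32c(dst, src, sizeof(src)));
    return 0;
}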
00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.015 21:22:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.035 00:06:39.035 real 0m1.291s 00:06:39.035 user 0m1.200s 00:06:39.035 sys 0m0.103s 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.035 21:22:28 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:39.035 ************************************ 00:06:39.035 END TEST accel_copy_crc32c_C2 00:06:39.035 ************************************ 00:06:39.035 21:22:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.035 21:22:28 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:39.035 21:22:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:39.035 21:22:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.035 21:22:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.035 ************************************ 00:06:39.035 START TEST accel_dualcast 00:06:39.035 ************************************ 00:06:39.035 21:22:28 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:39.035 21:22:28 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:39.035 21:22:28 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:39.035 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.035 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.035 21:22:28 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:39.035 21:22:28 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:39.035 21:22:28 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:39.035 21:22:28 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.035 21:22:28 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.035 21:22:28 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.035 21:22:28 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.035 21:22:28 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.035 21:22:28 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:39.035 21:22:28 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:39.035 [2024-07-15 21:22:28.749441] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
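The dualcast run starting here copies a single source buffer into two destination buffers in one operation, which is the usual meaning of dualcast in the accel framework. A plain C stand-in for the 4096-byte case (not SPDK's accel code) looks like:

/* Illustrative "dualcast": one source copied into two destinations. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

static void dualcast(uint8_t *dst1, uint8_t *dst2, const uint8_t *src, size_t len)
{
    memcpy(dst1, src, len);
    memcpy(dst2, src, len);
}

int main(void)
{
    static uint8_t src[4096], dst1[4096], dst2[4096];
    memset(src, 0x3C, sizeof(src));
    dualcast(dst1, dst2, src, sizeof(src));
    assert(memcmp(dst1, src, sizeof(src)) == 0);
    assert(memcmp(dst2, src, sizeof(src)) == 0);
    return 0;
}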
00:06:39.035 [2024-07-15 21:22:28.749551] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976744 ] 00:06:39.035 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.035 [2024-07-15 21:22:28.820966] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.296 [2024-07-15 21:22:28.888517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.296 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.297 21:22:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.237 21:22:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.238 21:22:30 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:40.238 21:22:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.238 00:06:40.238 real 0m1.299s 00:06:40.238 user 0m1.202s 00:06:40.238 sys 0m0.108s 00:06:40.238 21:22:30 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.238 21:22:30 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:40.238 ************************************ 00:06:40.238 END TEST accel_dualcast 00:06:40.238 ************************************ 00:06:40.497 21:22:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.497 21:22:30 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:40.497 21:22:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:40.497 21:22:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.497 21:22:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.497 ************************************ 00:06:40.497 START TEST accel_compare 00:06:40.497 ************************************ 00:06:40.497 21:22:30 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:40.497 21:22:30 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:40.497 21:22:30 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:40.497 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.497 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.497 21:22:30 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:40.497 21:22:30 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:40.497 21:22:30 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:40.497 21:22:30 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.497 21:22:30 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.497 21:22:30 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.497 21:22:30 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:40.498 [2024-07-15 21:22:30.124502] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:06:40.498 [2024-07-15 21:22:30.124589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977045 ] 00:06:40.498 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.498 [2024-07-15 21:22:30.186049] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.498 [2024-07-15 21:22:30.250834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.498 21:22:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.880 
21:22:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:41.880 21:22:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.880 00:06:41.880 real 0m1.284s 00:06:41.880 user 0m1.201s 00:06:41.880 sys 0m0.092s 00:06:41.880 21:22:31 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.880 21:22:31 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:41.880 ************************************ 00:06:41.880 END TEST accel_compare 00:06:41.880 ************************************ 00:06:41.880 21:22:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.880 21:22:31 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:41.880 21:22:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:41.880 21:22:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.880 21:22:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.880 ************************************ 00:06:41.880 START TEST accel_xor 00:06:41.880 ************************************ 00:06:41.880 21:22:31 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:41.880 [2024-07-15 21:22:31.465339] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:06:41.880 [2024-07-15 21:22:31.465432] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977398 ] 00:06:41.880 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.880 [2024-07-15 21:22:31.526639] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.880 [2024-07-15 21:22:31.592084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.880 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.881 21:22:31 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.881 21:22:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.264 21:22:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.265 00:06:43.265 real 0m1.286s 00:06:43.265 user 0m1.205s 00:06:43.265 sys 0m0.091s 00:06:43.265 21:22:32 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.265 21:22:32 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:43.265 ************************************ 00:06:43.265 END TEST accel_xor 00:06:43.265 ************************************ 00:06:43.265 21:22:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.265 21:22:32 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:43.265 21:22:32 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:43.265 21:22:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.265 21:22:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.265 ************************************ 00:06:43.265 START TEST accel_xor 00:06:43.265 ************************************ 00:06:43.265 21:22:32 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:43.265 [2024-07-15 21:22:32.820987] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:06:43.265 [2024-07-15 21:22:32.821049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977747 ] 00:06:43.265 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.265 [2024-07-15 21:22:32.882010] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.265 [2024-07-15 21:22:32.949717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.265 21:22:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:44.646 21:22:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.646 00:06:44.646 real 0m1.286s 00:06:44.646 user 0m1.196s 00:06:44.646 sys 0m0.101s 00:06:44.646 21:22:34 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.646 21:22:34 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:44.646 ************************************ 00:06:44.646 END TEST accel_xor 00:06:44.646 ************************************ 00:06:44.646 21:22:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.646 21:22:34 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:44.646 21:22:34 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:44.646 21:22:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.646 21:22:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.646 ************************************ 00:06:44.646 START TEST accel_dif_verify 00:06:44.646 ************************************ 00:06:44.646 21:22:34 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:44.646 [2024-07-15 21:22:34.178121] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:06:44.646 [2024-07-15 21:22:34.178275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978021 ] 00:06:44.646 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.646 [2024-07-15 21:22:34.239784] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.646 [2024-07-15 21:22:34.309910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.646 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.647 21:22:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:46.029 21:22:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.030 21:22:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:46.030 21:22:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.030 00:06:46.030 real 0m1.290s 00:06:46.030 user 0m1.203s 00:06:46.030 sys 0m0.099s 00:06:46.030 21:22:35 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.030 21:22:35 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:46.030 ************************************ 00:06:46.030 END TEST accel_dif_verify 00:06:46.030 ************************************ 00:06:46.030 21:22:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.030 21:22:35 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:46.030 21:22:35 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:46.030 21:22:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.030 21:22:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.030 ************************************ 00:06:46.030 START TEST accel_dif_generate 00:06:46.030 ************************************ 00:06:46.030 21:22:35 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 
21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:46.030 [2024-07-15 21:22:35.540450] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:46.030 [2024-07-15 21:22:35.540530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978207 ] 00:06:46.030 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.030 [2024-07-15 21:22:35.602321] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.030 [2024-07-15 21:22:35.670406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:46.030 21:22:35 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.030 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.031 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.031 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.031 21:22:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.031 21:22:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.031 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.031 21:22:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.416 21:22:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.417 21:22:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.417 21:22:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:47.417 21:22:36 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.417 00:06:47.417 real 0m1.288s 00:06:47.417 user 0m1.206s 00:06:47.417 sys 0m0.093s 00:06:47.417 21:22:36 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.417 21:22:36 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:47.417 ************************************ 00:06:47.417 END TEST accel_dif_generate 00:06:47.417 ************************************ 00:06:47.417 21:22:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.417 21:22:36 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:47.417 21:22:36 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:47.417 21:22:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.417 21:22:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.417 ************************************ 00:06:47.417 START TEST accel_dif_generate_copy 00:06:47.417 ************************************ 00:06:47.417 21:22:36 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:47.417 21:22:36 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:47.417 21:22:36 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:47.417 21:22:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:36 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:47.417 21:22:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:47.417 21:22:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:47.417 21:22:36 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.417 21:22:36 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.417 21:22:36 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.417 21:22:36 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.417 21:22:36 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.417 21:22:36 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:47.417 21:22:36 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:47.417 [2024-07-15 21:22:36.897189] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
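The full accel_perf command line for these DIF runs appears verbatim in the trace above. As a minimal sketch for reproducing them by hand — assuming a local SPDK checkout built under ./spdk/build, and assuming the empty JSON accel config the harness pipes in over /dev/fd/62 can simply be dropped — the software-path runs reduce to:

# 1-second software-path DIF generate, mirroring the logged invocation
./spdk/build/examples/accel_perf -t 1 -w dif_generate
# same workload, but generate-and-copy in a single operation
./spdk/build/examples/accel_perf -t 1 -w dif_generate_copy

Both -t (run time in seconds) and -w (workload name) are taken directly from the command lines recorded in this log; anything beyond that is an assumption about the local environment.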
00:06:47.417 [2024-07-15 21:22:36.897254] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978486 ] 00:06:47.417 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.417 [2024-07-15 21:22:36.957622] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.417 [2024-07-15 21:22:37.024128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.417 21:22:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.360 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.361 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.361 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:48.361 21:22:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.361 00:06:48.361 real 0m1.284s 00:06:48.361 user 0m1.199s 00:06:48.361 sys 0m0.097s 00:06:48.361 21:22:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.361 21:22:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:48.361 ************************************ 00:06:48.361 END TEST accel_dif_generate_copy 00:06:48.361 ************************************ 00:06:48.622 21:22:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.623 21:22:38 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:48.623 21:22:38 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.623 21:22:38 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:48.623 21:22:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.623 21:22:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.623 ************************************ 00:06:48.623 START TEST accel_comp 00:06:48.623 ************************************ 00:06:48.623 21:22:38 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.623 21:22:38 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:48.623 [2024-07-15 21:22:38.249204] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:48.623 [2024-07-15 21:22:38.249292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978835 ] 00:06:48.623 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.623 [2024-07-15 21:22:38.310792] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.623 [2024-07-15 21:22:38.375788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.623 21:22:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.009 21:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.009 21:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.009 21:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.009 21:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.009 21:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.009 21:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.009 21:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.009 21:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:50.010 21:22:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.010 00:06:50.010 real 0m1.287s 00:06:50.010 user 0m1.197s 00:06:50.010 sys 0m0.102s 00:06:50.010 21:22:39 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.010 21:22:39 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:50.010 ************************************ 00:06:50.010 END TEST accel_comp 00:06:50.010 ************************************ 00:06:50.010 21:22:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.010 21:22:39 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.010 21:22:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:50.010 21:22:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.010 21:22:39 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:50.010 ************************************ 00:06:50.010 START TEST accel_decomp 00:06:50.010 ************************************ 00:06:50.010 21:22:39 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:50.010 [2024-07-15 21:22:39.609344] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
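The compress/decompress cases add -l, pointing accel_perf at the test input file test/accel/bib shipped in the SPDK tree, and the decompress variants also pass -y, which per the harness usage appears to request result verification. A hedged sketch of the equivalent manual runs, under the same local-build assumption as above:

# compress/decompress the bundled test file for 1 second each,
# reusing exactly the flags shown in the trace
BIB=./spdk/test/accel/bib
./spdk/build/examples/accel_perf -t 1 -w compress   -l "$BIB"
./spdk/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y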
00:06:50.010 [2024-07-15 21:22:39.609438] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979188 ] 00:06:50.010 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.010 [2024-07-15 21:22:39.671947] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.010 [2024-07-15 21:22:39.738452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.010 21:22:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.398 21:22:40 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:51.398 21:22:40 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.398 00:06:51.398 real 0m1.292s 00:06:51.398 user 0m1.196s 00:06:51.398 sys 0m0.108s 00:06:51.398 21:22:40 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.398 21:22:40 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:51.398 ************************************ 00:06:51.398 END TEST accel_decomp 00:06:51.398 ************************************ 00:06:51.398 21:22:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.398 21:22:40 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.398 21:22:40 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:51.398 21:22:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.398 21:22:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.398 ************************************ 00:06:51.398 START TEST accel_decomp_full 00:06:51.398 ************************************ 00:06:51.398 21:22:40 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.398 21:22:40 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:51.398 21:22:40 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:51.398 21:22:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.398 21:22:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.398 21:22:40 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.398 21:22:40 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.398 21:22:40 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:51.398 21:22:40 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.398 21:22:40 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.398 21:22:40 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.398 21:22:40 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.398 21:22:40 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.398 21:22:40 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:51.398 21:22:40 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:51.398 [2024-07-15 21:22:40.971200] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:51.398 [2024-07-15 21:22:40.971261] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979485 ] 00:06:51.398 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.398 [2024-07-15 21:22:41.030727] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.398 [2024-07-15 21:22:41.095242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.398 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.399 21:22:41 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.399 21:22:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:52.786 21:22:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.786 00:06:52.786 real 0m1.291s 00:06:52.786 user 0m1.203s 00:06:52.786 sys 0m0.102s 00:06:52.786 21:22:42 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.786 21:22:42 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:52.786 ************************************ 00:06:52.786 END TEST accel_decomp_full 00:06:52.786 ************************************ 00:06:52.786 21:22:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.786 21:22:42 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.786 21:22:42 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:52.786 21:22:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.786 21:22:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.786 ************************************ 00:06:52.786 START TEST accel_decomp_mcore 00:06:52.786 ************************************ 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:52.786 [2024-07-15 21:22:42.332094] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
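The *_mcore variants differ only in the core mask: -m 0xf asks the SPDK app framework for cores 0-3, which matches the "Total cores available: 4" notice and the four "Reactor started on core N" lines recorded for this run. A sketch of the multi-core decompress invocation, with the same assumptions about a local build as in the earlier sketches:

# 4-core software decompress; -m 0xf selects cores 0-3
./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y -m 0xf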
00:06:52.786 [2024-07-15 21:22:42.332171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979662 ] 00:06:52.786 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.786 [2024-07-15 21:22:42.395162] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.786 [2024-07-15 21:22:42.467837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.786 [2024-07-15 21:22:42.467955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.786 [2024-07-15 21:22:42.468113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.786 [2024-07-15 21:22:42.468114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.786 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.787 21:22:42 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:52.787 21:22:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.169 00:06:54.169 real 0m1.302s 00:06:54.169 user 0m4.438s 00:06:54.169 sys 0m0.112s 00:06:54.169 21:22:43 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.169 21:22:43 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:54.169 ************************************ 00:06:54.169 END TEST accel_decomp_mcore 00:06:54.169 ************************************ 00:06:54.169 21:22:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.169 21:22:43 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:54.169 21:22:43 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:54.169 21:22:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.169 21:22:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.169 ************************************ 00:06:54.169 START TEST accel_decomp_full_mcore 00:06:54.169 ************************************ 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:54.169 [2024-07-15 21:22:43.709496] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
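The accel_decomp_full_mcore case starting here drives the same accel_perf example binary as the previous test, this time across four cores (-m 0xf) and with -o 0 so the whole pre-compressed test vector is handled per operation. A rough way to replay it by hand, based only on the command line visible in the trace — the harness additionally feeds an accel JSON config through -c /dev/fd/62, which is left out here, and the reading of -l (input file) and -y (verify the output) is inferred from this trace rather than checked against the tool's help text:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1-second software decompress workload on cores 0-3, verifying results
  ./build/examples/accel_perf -t 1 -w decompress \
      -l test/accel/bib -y -o 0 -m 0xf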
00:06:54.169 [2024-07-15 21:22:43.709560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979930 ] 00:06:54.169 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.169 [2024-07-15 21:22:43.771396] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.169 [2024-07-15 21:22:43.844160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.169 [2024-07-15 21:22:43.844384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.169 [2024-07-15 21:22:43.844384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.169 [2024-07-15 21:22:43.844233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.169 21:22:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.553 00:06:55.553 real 0m1.311s 00:06:55.553 user 0m4.476s 00:06:55.553 sys 0m0.109s 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.553 21:22:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:55.553 ************************************ 00:06:55.554 END TEST accel_decomp_full_mcore 00:06:55.554 ************************************ 00:06:55.554 21:22:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.554 21:22:45 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:55.554 21:22:45 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:55.554 21:22:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.554 21:22:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.554 ************************************ 00:06:55.554 START TEST accel_decomp_mthread 00:06:55.554 ************************************ 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:55.554 [2024-07-15 21:22:45.079210] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
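Two things stand out when comparing the traces: the "full" variants operate on the whole ~111 kB vector per task (val='111250 bytes', the result of passing -o 0), while the plain variants work in 4096-byte chunks (val='4096 bytes'); and the accel_decomp_mthread case beginning here drops the multi-core mask in favour of -T 2, which the script uses to run the decompress workload with two threads — that reading of -T is taken from this trace, not from accel_perf's documentation. A sketch of the command as the harness builds it, again without the -c /dev/fd/62 config descriptor:

  ./build/examples/accel_perf -t 1 -w decompress \
      -l test/accel/bib -y -T 2   # single core (0x1), two worker threads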
00:06:55.554 [2024-07-15 21:22:45.079274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1980282 ] 00:06:55.554 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.554 [2024-07-15 21:22:45.141009] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.554 [2024-07-15 21:22:45.208197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.554 21:22:45 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.554 21:22:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.939 21:22:46 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.939 00:06:56.939 real 0m1.293s 00:06:56.939 user 0m1.201s 00:06:56.939 sys 0m0.104s 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.939 21:22:46 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:56.939 ************************************ 00:06:56.939 END TEST accel_decomp_mthread 00:06:56.939 ************************************ 00:06:56.939 21:22:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.939 21:22:46 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.939 21:22:46 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:56.939 21:22:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.939 21:22:46 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:56.939 ************************************ 00:06:56.939 START TEST accel_decomp_full_mthread 00:06:56.939 ************************************ 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:56.939 [2024-07-15 21:22:46.444334] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
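Every accel_perf start in this section is followed by the same "EAL: No free 2048 kB hugepages reported on node 1" notice. It only means that NUMA node 1 has no 2 MB hugepages reserved; the runs above all complete, so the pool on node 0 is evidently sufficient, and the message reads as informational rather than as a failure. If the per-node reservation ever needs checking on the build host, the usual kernel interfaces show it directly (standard sysfs/procfs paths, assumed rather than taken from this log):

  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  grep -i hugepages /proc/meminfo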
00:06:56.939 [2024-07-15 21:22:46.444451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1980631 ] 00:06:56.939 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.939 [2024-07-15 21:22:46.515828] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.939 [2024-07-15 21:22:46.589913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.939 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.940 21:22:46 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.940 21:22:46 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.940 21:22:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.324 00:06:58.324 real 0m1.337s 00:06:58.324 user 0m1.229s 00:06:58.324 sys 0m0.120s 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.324 21:22:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:58.325 ************************************ 00:06:58.325 END 
TEST accel_decomp_full_mthread 00:06:58.325 ************************************ 00:06:58.325 21:22:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.325 21:22:47 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:58.325 21:22:47 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:58.325 21:22:47 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:58.325 21:22:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.325 21:22:47 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:58.325 21:22:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.325 21:22:47 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.325 21:22:47 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.325 21:22:47 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.325 21:22:47 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.325 21:22:47 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.325 21:22:47 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:58.325 21:22:47 accel -- accel/accel.sh@41 -- # jq -r . 00:06:58.325 ************************************ 00:06:58.325 START TEST accel_dif_functional_tests 00:06:58.325 ************************************ 00:06:58.325 21:22:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:58.325 [2024-07-15 21:22:47.868972] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:58.325 [2024-07-15 21:22:47.869018] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1980987 ] 00:06:58.325 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.325 [2024-07-15 21:22:47.929208] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.325 [2024-07-15 21:22:48.001069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.325 [2024-07-15 21:22:48.001202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.325 [2024-07-15 21:22:48.001205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.325 00:06:58.325 00:06:58.325 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.325 http://cunit.sourceforge.net/ 00:06:58.325 00:06:58.325 00:06:58.325 Suite: accel_dif 00:06:58.325 Test: verify: DIF generated, GUARD check ...passed 00:06:58.325 Test: verify: DIF generated, APPTAG check ...passed 00:06:58.325 Test: verify: DIF generated, REFTAG check ...passed 00:06:58.325 Test: verify: DIF not generated, GUARD check ...[2024-07-15 21:22:48.057365] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:58.325 passed 00:06:58.325 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 21:22:48.057412] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:58.325 passed 00:06:58.325 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 21:22:48.057432] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:58.325 passed 00:06:58.325 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:58.325 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
21:22:48.057480] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:58.325 passed 00:06:58.325 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:58.325 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:58.325 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:58.325 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 21:22:48.057589] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:58.325 passed 00:06:58.325 Test: verify copy: DIF generated, GUARD check ...passed 00:06:58.325 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:58.325 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:58.325 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 21:22:48.057710] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:58.325 passed 00:06:58.325 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 21:22:48.057734] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:58.325 passed 00:06:58.325 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 21:22:48.057757] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:58.325 passed 00:06:58.325 Test: generate copy: DIF generated, GUARD check ...passed 00:06:58.325 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:58.325 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:58.325 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:58.325 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:58.325 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:58.325 Test: generate copy: iovecs-len validate ...[2024-07-15 21:22:48.057945] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:58.325 passed 00:06:58.325 Test: generate copy: buffer alignment validate ...passed 00:06:58.325 00:06:58.325 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.325 suites 1 1 n/a 0 0 00:06:58.325 tests 26 26 26 0 0 00:06:58.325 asserts 115 115 115 0 n/a 00:06:58.325 00:06:58.325 Elapsed time = 0.000 seconds 00:06:58.586 00:06:58.586 real 0m0.349s 00:06:58.586 user 0m0.488s 00:06:58.586 sys 0m0.125s 00:06:58.586 21:22:48 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.586 21:22:48 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:58.586 ************************************ 00:06:58.586 END TEST accel_dif_functional_tests 00:06:58.586 ************************************ 00:06:58.586 21:22:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.586 00:06:58.586 real 0m29.899s 00:06:58.586 user 0m33.583s 00:06:58.586 sys 0m4.021s 00:06:58.586 21:22:48 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.586 21:22:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.586 ************************************ 00:06:58.586 END TEST accel 00:06:58.586 ************************************ 00:06:58.586 21:22:48 -- common/autotest_common.sh@1142 -- # return 0 00:06:58.586 21:22:48 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:58.586 21:22:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.586 21:22:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.586 21:22:48 -- common/autotest_common.sh@10 -- # set +x 00:06:58.586 ************************************ 00:06:58.586 START TEST accel_rpc 00:06:58.586 ************************************ 00:06:58.586 21:22:48 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:58.587 * Looking for test storage... 00:06:58.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:58.587 21:22:48 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:58.587 21:22:48 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1981053 00:06:58.587 21:22:48 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1981053 00:06:58.848 21:22:48 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:58.848 21:22:48 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1981053 ']' 00:06:58.848 21:22:48 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.848 21:22:48 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.848 21:22:48 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.848 21:22:48 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.848 21:22:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.848 [2024-07-15 21:22:48.454501] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
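The accel_rpc suite starting here runs against an spdk_tgt launched with --wait-for-rpc and drives it purely through scripts/rpc.py; the accel_assign_opcode sub-test first points the copy opcode at a deliberately bogus module, then at the software module, starts the framework, and checks the resulting assignment. The same sequence, replayed by hand with the harness's waitforlisten/killprocess helpers left out and the default /var/tmp/spdk.sock socket assumed:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/bin/spdk_tgt --wait-for-rpc &
  ./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # bogus module, as in the test
  ./scripts/rpc.py accel_assign_opc -o copy -m software    # reassign to the software module
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy # prints "software", matching the trace
  kill %1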
00:06:58.848 [2024-07-15 21:22:48.454551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1981053 ] 00:06:58.848 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.848 [2024-07-15 21:22:48.516581] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.848 [2024-07-15 21:22:48.581262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.848 21:22:48 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.848 21:22:48 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:58.848 21:22:48 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:58.848 21:22:48 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:58.848 21:22:48 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:58.848 21:22:48 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:58.848 21:22:48 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:58.848 21:22:48 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.848 21:22:48 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.848 21:22:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.848 ************************************ 00:06:58.848 START TEST accel_assign_opcode 00:06:58.848 ************************************ 00:06:58.848 21:22:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:58.848 21:22:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:58.848 21:22:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.848 21:22:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:58.848 [2024-07-15 21:22:48.645694] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:58.848 21:22:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.848 21:22:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:58.848 21:22:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.848 21:22:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.109 [2024-07-15 21:22:48.657722] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:59.109 21:22:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.109 21:22:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:59.109 21:22:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.109 21:22:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.109 21:22:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.109 21:22:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:59.109 21:22:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:59.109 21:22:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:59.109 
21:22:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.109 21:22:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.109 21:22:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.109 software 00:06:59.109 00:06:59.109 real 0m0.211s 00:06:59.109 user 0m0.052s 00:06:59.109 sys 0m0.006s 00:06:59.109 21:22:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.109 21:22:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:59.109 ************************************ 00:06:59.109 END TEST accel_assign_opcode 00:06:59.109 ************************************ 00:06:59.109 21:22:48 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:59.109 21:22:48 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1981053 00:06:59.109 21:22:48 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1981053 ']' 00:06:59.109 21:22:48 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1981053 00:06:59.109 21:22:48 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:59.109 21:22:48 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.109 21:22:48 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1981053 00:06:59.370 21:22:48 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.371 21:22:48 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.371 21:22:48 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1981053' 00:06:59.371 killing process with pid 1981053 00:06:59.371 21:22:48 accel_rpc -- common/autotest_common.sh@967 -- # kill 1981053 00:06:59.371 21:22:48 accel_rpc -- common/autotest_common.sh@972 -- # wait 1981053 00:06:59.371 00:06:59.371 real 0m0.864s 00:06:59.371 user 0m0.884s 00:06:59.371 sys 0m0.363s 00:06:59.371 21:22:49 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.371 21:22:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.371 ************************************ 00:06:59.371 END TEST accel_rpc 00:06:59.371 ************************************ 00:06:59.631 21:22:49 -- common/autotest_common.sh@1142 -- # return 0 00:06:59.632 21:22:49 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:59.632 21:22:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.632 21:22:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.632 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:06:59.632 ************************************ 00:06:59.632 START TEST app_cmdline 00:06:59.632 ************************************ 00:06:59.632 21:22:49 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:59.632 * Looking for test storage... 
00:06:59.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:59.632 21:22:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:59.632 21:22:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1981380 00:06:59.632 21:22:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1981380 00:06:59.632 21:22:49 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:59.632 21:22:49 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1981380 ']' 00:06:59.632 21:22:49 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.632 21:22:49 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.632 21:22:49 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.632 21:22:49 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.632 21:22:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.632 [2024-07-15 21:22:49.384349] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:06:59.632 [2024-07-15 21:22:49.384404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1981380 ] 00:06:59.632 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.892 [2024-07-15 21:22:49.443303] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.892 [2024-07-15 21:22:49.508274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.498 21:22:50 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.498 21:22:50 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:00.498 21:22:50 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:00.498 { 00:07:00.498 "version": "SPDK v24.09-pre git sha1 b26ca8289", 00:07:00.498 "fields": { 00:07:00.498 "major": 24, 00:07:00.498 "minor": 9, 00:07:00.498 "patch": 0, 00:07:00.498 "suffix": "-pre", 00:07:00.498 "commit": "b26ca8289" 00:07:00.498 } 00:07:00.498 } 00:07:00.795 21:22:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:00.795 21:22:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:00.795 21:22:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:00.795 21:22:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:00.795 21:22:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:00.795 21:22:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:00.795 21:22:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.795 21:22:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:00.795 21:22:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:00.795 21:22:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.795 request: 00:07:00.795 { 00:07:00.795 "method": "env_dpdk_get_mem_stats", 00:07:00.795 "req_id": 1 00:07:00.795 } 00:07:00.795 Got JSON-RPC error response 00:07:00.795 response: 00:07:00.795 { 00:07:00.795 "code": -32601, 00:07:00.795 "message": "Method not found" 00:07:00.795 } 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.795 21:22:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1981380 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1981380 ']' 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1981380 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1981380 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1981380' 00:07:00.795 killing process with pid 1981380 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@967 -- # kill 1981380 00:07:00.795 21:22:50 app_cmdline -- common/autotest_common.sh@972 -- # wait 1981380 00:07:01.057 00:07:01.057 real 0m1.553s 00:07:01.057 user 0m1.888s 00:07:01.057 sys 0m0.382s 00:07:01.057 21:22:50 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:07:01.057 21:22:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.057 ************************************ 00:07:01.057 END TEST app_cmdline 00:07:01.057 ************************************ 00:07:01.057 21:22:50 -- common/autotest_common.sh@1142 -- # return 0 00:07:01.057 21:22:50 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:01.057 21:22:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:01.057 21:22:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.057 21:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:01.057 ************************************ 00:07:01.057 START TEST version 00:07:01.057 ************************************ 00:07:01.057 21:22:50 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:01.319 * Looking for test storage... 00:07:01.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:01.319 21:22:50 version -- app/version.sh@17 -- # get_header_version major 00:07:01.319 21:22:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.319 21:22:50 version -- app/version.sh@14 -- # cut -f2 00:07:01.319 21:22:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.319 21:22:50 version -- app/version.sh@17 -- # major=24 00:07:01.319 21:22:50 version -- app/version.sh@18 -- # get_header_version minor 00:07:01.319 21:22:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.319 21:22:50 version -- app/version.sh@14 -- # cut -f2 00:07:01.319 21:22:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.319 21:22:50 version -- app/version.sh@18 -- # minor=9 00:07:01.319 21:22:50 version -- app/version.sh@19 -- # get_header_version patch 00:07:01.319 21:22:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.319 21:22:50 version -- app/version.sh@14 -- # cut -f2 00:07:01.319 21:22:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.319 21:22:50 version -- app/version.sh@19 -- # patch=0 00:07:01.319 21:22:50 version -- app/version.sh@20 -- # get_header_version suffix 00:07:01.319 21:22:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:01.319 21:22:50 version -- app/version.sh@14 -- # cut -f2 00:07:01.319 21:22:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:01.319 21:22:50 version -- app/version.sh@20 -- # suffix=-pre 00:07:01.319 21:22:50 version -- app/version.sh@22 -- # version=24.9 00:07:01.319 21:22:50 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:01.319 21:22:50 version -- app/version.sh@28 -- # version=24.9rc0 00:07:01.319 21:22:50 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:01.319 21:22:50 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:01.319 21:22:51 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:01.319 21:22:51 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:01.319 00:07:01.319 real 0m0.174s 00:07:01.319 user 0m0.086s 00:07:01.319 sys 0m0.127s 00:07:01.319 21:22:51 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.319 21:22:51 version -- common/autotest_common.sh@10 -- # set +x 00:07:01.319 ************************************ 00:07:01.319 END TEST version 00:07:01.319 ************************************ 00:07:01.319 21:22:51 -- common/autotest_common.sh@1142 -- # return 0 00:07:01.319 21:22:51 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:01.319 21:22:51 -- spdk/autotest.sh@198 -- # uname -s 00:07:01.319 21:22:51 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:01.319 21:22:51 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:01.319 21:22:51 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:01.319 21:22:51 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:01.319 21:22:51 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:01.319 21:22:51 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:01.319 21:22:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:01.319 21:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:01.319 21:22:51 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:01.319 21:22:51 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:01.319 21:22:51 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:01.319 21:22:51 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:01.319 21:22:51 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:01.319 21:22:51 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:01.319 21:22:51 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:01.319 21:22:51 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:01.319 21:22:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.319 21:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:01.581 ************************************ 00:07:01.581 START TEST nvmf_tcp 00:07:01.581 ************************************ 00:07:01.581 21:22:51 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:01.581 * Looking for test storage... 00:07:01.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.581 21:22:51 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.581 21:22:51 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.581 21:22:51 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.581 21:22:51 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.581 21:22:51 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.581 21:22:51 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.581 21:22:51 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:01.581 21:22:51 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:01.581 21:22:51 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:01.581 21:22:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:01.581 21:22:51 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:01.581 21:22:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:01.581 21:22:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.581 21:22:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.581 ************************************ 00:07:01.581 START TEST nvmf_example 00:07:01.581 ************************************ 00:07:01.581 21:22:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:01.844 * Looking for test storage... 
00:07:01.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:01.844 21:22:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:08.428 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:08.428 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:08.428 Found net devices under 
0000:4b:00.0: cvl_0_0 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:08.428 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:08.428 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:08.429 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.429 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:08.429 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:08.429 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.429 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.688 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.688 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.688 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:08.688 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.688 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.688 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:08.688 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:08.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:07:08.688 00:07:08.688 --- 10.0.0.2 ping statistics --- 00:07:08.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.688 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:07:08.688 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:08.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.459 ms 00:07:08.948 00:07:08.948 --- 10.0.0.1 ping statistics --- 00:07:08.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.948 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1985549 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1985549 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1985549 ']' 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
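At this point nvmftestinit has finished the data-path plumbing logged above: one E810 port is moved into a private network namespace, both ends are addressed in 10.0.0.0/24, TCP port 4420 is opened in the firewall, connectivity is verified with ping in both directions, and nvme-tcp is loaded before the example target is launched inside the namespace. A condensed sketch of that setup, reusing the interface names from this run (cvl_0_0 and cvl_0_1 are specific to these NICs):

ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, namespace side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # host -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> host
modprobe nvme-tcp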
00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.948 21:22:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.948 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:09.888 21:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:09.888 EAL: No free 2048 kB hugepages reported on node 1 
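The perf run whose output follows talks to a target that was provisioned just above with five rpc_cmd calls: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, that bdev attached as namespace 1, and a listener on 10.0.0.2:4420. A condensed manual equivalent, assuming the example nvmf target from this run is up inside cvl_0_0_ns_spdk and rpc.py reaches it over the default /var/tmp/spdk.sock socket:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512                        # creates Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# then drive it from the host side with the same workload as the test
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'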
00:07:19.888 Initializing NVMe Controllers 00:07:19.888 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:19.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:19.888 Initialization complete. Launching workers. 00:07:19.888 ======================================================== 00:07:19.888 Latency(us) 00:07:19.888 Device Information : IOPS MiB/s Average min max 00:07:19.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16296.40 63.66 3926.80 799.95 20113.17 00:07:19.888 ======================================================== 00:07:19.888 Total : 16296.40 63.66 3926.80 799.95 20113.17 00:07:19.888 00:07:19.888 21:23:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:19.888 21:23:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:19.888 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:19.888 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:19.888 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:19.888 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:19.888 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:19.888 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:19.888 rmmod nvme_tcp 00:07:19.888 rmmod nvme_fabrics 00:07:19.888 rmmod nvme_keyring 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1985549 ']' 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1985549 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1985549 ']' 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1985549 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1985549 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1985549' 00:07:20.149 killing process with pid 1985549 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1985549 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1985549 00:07:20.149 nvmf threads initialize successfully 00:07:20.149 bdev subsystem init successfully 00:07:20.149 created a nvmf target service 00:07:20.149 create targets's poll groups done 00:07:20.149 all subsystems of target started 00:07:20.149 nvmf target is running 00:07:20.149 all subsystems of target stopped 00:07:20.149 destroy targets's poll groups done 00:07:20.149 destroyed the nvmf target service 00:07:20.149 bdev subsystem finish successfully 00:07:20.149 nvmf threads destroy successfully 00:07:20.149 21:23:09 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.149 21:23:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.731 21:23:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:22.731 21:23:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:22.731 21:23:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.731 21:23:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.731 00:07:22.731 real 0m20.657s 00:07:22.731 user 0m46.217s 00:07:22.731 sys 0m6.218s 00:07:22.731 21:23:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.731 21:23:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.731 ************************************ 00:07:22.731 END TEST nvmf_example 00:07:22.731 ************************************ 00:07:22.731 21:23:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:22.731 21:23:12 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:22.731 21:23:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.731 21:23:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.731 21:23:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.731 ************************************ 00:07:22.731 START TEST nvmf_filesystem 00:07:22.731 ************************************ 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:22.731 * Looking for test storage... 
00:07:22.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:22.731 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:22.732 21:23:12 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:22.732 #define SPDK_CONFIG_H 00:07:22.732 #define SPDK_CONFIG_APPS 1 00:07:22.732 #define SPDK_CONFIG_ARCH native 00:07:22.732 #undef SPDK_CONFIG_ASAN 00:07:22.732 #undef SPDK_CONFIG_AVAHI 00:07:22.732 #undef SPDK_CONFIG_CET 00:07:22.732 #define SPDK_CONFIG_COVERAGE 1 00:07:22.732 #define SPDK_CONFIG_CROSS_PREFIX 00:07:22.732 #undef SPDK_CONFIG_CRYPTO 00:07:22.732 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:22.732 #undef SPDK_CONFIG_CUSTOMOCF 00:07:22.732 #undef SPDK_CONFIG_DAOS 00:07:22.732 #define SPDK_CONFIG_DAOS_DIR 00:07:22.732 #define SPDK_CONFIG_DEBUG 1 00:07:22.732 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:22.732 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:22.732 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:22.732 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:22.732 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:22.732 #undef SPDK_CONFIG_DPDK_UADK 00:07:22.732 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:22.732 #define SPDK_CONFIG_EXAMPLES 1 00:07:22.732 #undef SPDK_CONFIG_FC 00:07:22.732 #define SPDK_CONFIG_FC_PATH 00:07:22.732 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:22.732 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:22.732 #undef SPDK_CONFIG_FUSE 00:07:22.732 #undef SPDK_CONFIG_FUZZER 00:07:22.732 #define SPDK_CONFIG_FUZZER_LIB 00:07:22.732 #undef SPDK_CONFIG_GOLANG 00:07:22.732 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:22.732 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:22.732 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:22.732 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:22.732 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:22.732 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:22.732 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:22.732 #define SPDK_CONFIG_IDXD 1 00:07:22.732 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:22.732 #undef SPDK_CONFIG_IPSEC_MB 00:07:22.732 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:22.732 #define SPDK_CONFIG_ISAL 1 00:07:22.732 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:22.732 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:22.732 #define SPDK_CONFIG_LIBDIR 00:07:22.732 #undef SPDK_CONFIG_LTO 00:07:22.732 #define SPDK_CONFIG_MAX_LCORES 128 00:07:22.732 #define SPDK_CONFIG_NVME_CUSE 1 00:07:22.732 #undef SPDK_CONFIG_OCF 00:07:22.732 #define SPDK_CONFIG_OCF_PATH 00:07:22.732 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:22.732 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:22.732 #define SPDK_CONFIG_PGO_DIR 00:07:22.732 #undef SPDK_CONFIG_PGO_USE 00:07:22.732 #define SPDK_CONFIG_PREFIX /usr/local 00:07:22.732 #undef SPDK_CONFIG_RAID5F 00:07:22.732 #undef SPDK_CONFIG_RBD 00:07:22.732 #define SPDK_CONFIG_RDMA 1 00:07:22.732 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:22.732 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:22.732 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:22.732 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:22.732 #define SPDK_CONFIG_SHARED 1 00:07:22.732 #undef SPDK_CONFIG_SMA 00:07:22.732 #define SPDK_CONFIG_TESTS 1 00:07:22.732 #undef SPDK_CONFIG_TSAN 00:07:22.732 #define SPDK_CONFIG_UBLK 1 00:07:22.732 #define SPDK_CONFIG_UBSAN 1 00:07:22.732 #undef SPDK_CONFIG_UNIT_TESTS 00:07:22.732 #undef SPDK_CONFIG_URING 00:07:22.732 #define SPDK_CONFIG_URING_PATH 00:07:22.732 #undef SPDK_CONFIG_URING_ZNS 00:07:22.732 #undef SPDK_CONFIG_USDT 00:07:22.732 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:22.732 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:22.732 #define SPDK_CONFIG_VFIO_USER 1 00:07:22.732 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:22.732 #define SPDK_CONFIG_VHOST 1 00:07:22.732 #define SPDK_CONFIG_VIRTIO 1 00:07:22.732 #undef SPDK_CONFIG_VTUNE 00:07:22.732 #define SPDK_CONFIG_VTUNE_DIR 00:07:22.732 #define SPDK_CONFIG_WERROR 1 00:07:22.732 #define SPDK_CONFIG_WPDK_DIR 00:07:22.732 #undef SPDK_CONFIG_XNVME 00:07:22.732 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:22.732 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:22.733 21:23:12 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.733 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
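The long run of paired ': <value>' / 'export SPDK_TEST_*' records from common/autotest_common.sh above is the harness fixing the effective test flags for this run: values the job already provided are kept (the trace shows 1 for SPDK_TEST_NVMF, tcp for SPDK_TEST_NVMF_TRANSPORT, e810 for SPDK_TEST_NVMF_NICS), while everything else falls back to 0 or empty before being exported. A minimal sketch of the defaulting idiom those trace pairs are consistent with; the flag name below is hypothetical, not one taken from the script:

  # Keep the caller's value if SPDK_TEST_EXAMPLE is already set, otherwise default it to 0,
  # then export it so the test scripts sourced later see the effective value.
  : "${SPDK_TEST_EXAMPLE:=0}"
  export SPDK_TEST_EXAMPLE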
00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1988351 ]] 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1988351 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.rZZagj 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.rZZagj/tests/target /tmp/spdk.rZZagj 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954236928 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330192896 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118659141632 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129371013120 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10711871488 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680796160 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864503296 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874202624 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9699328 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:22.734 21:23:12 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684040192 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1466368 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937097216 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937101312 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:22.734 * Looking for test storage... 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118659141632 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12926464000 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.734 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:22.735 21:23:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:29.322 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:07:29.322 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:29.322 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.322 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:29.323 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.323 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.584 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.584 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.584 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:29.584 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:29.584 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:29.584 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:29.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:07:29.846 00:07:29.846 --- 10.0.0.2 ping statistics --- 00:07:29.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.846 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:29.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:29.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:07:29.846 00:07:29.846 --- 10.0.0.1 ping statistics --- 00:07:29.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.846 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.846 ************************************ 00:07:29.846 START TEST nvmf_filesystem_no_in_capsule 00:07:29.846 ************************************ 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1991974 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1991974 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1991974 ']' 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.846 21:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.846 [2024-07-15 21:23:19.566026] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:07:29.846 [2024-07-15 21:23:19.566083] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.846 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.846 [2024-07-15 21:23:19.637300] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.107 [2024-07-15 21:23:19.716036] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.107 [2024-07-15 21:23:19.716072] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.107 [2024-07-15 21:23:19.716080] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.107 [2024-07-15 21:23:19.716086] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.107 [2024-07-15 21:23:19.716092] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.107 [2024-07-15 21:23:19.716172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.107 [2024-07-15 21:23:19.716226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.107 [2024-07-15 21:23:19.716540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.107 [2024-07-15 21:23:19.716541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.679 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.679 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:30.679 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:30.679 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:30.679 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.679 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.679 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:30.679 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:30.679 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.679 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.679 [2024-07-15 21:23:20.399804] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
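The trace up to this point covers the target-side bring-up: the two ice-bound E810 ports are exposed as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2/24, TCP port 4420 is opened in iptables, connectivity is verified with ping in both directions, and nvmf_tgt is started inside the namespace before the TCP transport is created over RPC. A minimal stand-alone sketch of that sequence is shown below; the interface names, addresses and SPDK paths are copied from this log, the rpc.py call stands in for the harness's rpc_cmd wrapper, and the sleep replaces the real waitforlisten check.

  # Sketch only: mirrors the netns + nvmf_tgt bring-up traced above.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path seen in this log
  NS=cvl_0_0_ns_spdk        # namespace holding the target-side port
  TGT_IF=cvl_0_0            # gets 10.0.0.2 inside the namespace
  INI_IF=cvl_0_1            # stays in the default namespace with 10.0.0.1

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                         # default namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1     # target namespace -> default namespace

  ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  sleep 2   # the harness waits on /var/tmp/spdk.sock instead of sleeping
  "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192 -c 0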
00:07:30.679 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.679 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:30.679 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.679 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.940 Malloc1 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.940 [2024-07-15 21:23:20.529390] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:30.940 { 00:07:30.940 "name": "Malloc1", 00:07:30.940 "aliases": [ 00:07:30.940 "223dfeba-7204-4f5f-81c4-5736953848f1" 00:07:30.940 ], 00:07:30.940 "product_name": "Malloc disk", 00:07:30.940 "block_size": 512, 00:07:30.940 "num_blocks": 1048576, 00:07:30.940 "uuid": "223dfeba-7204-4f5f-81c4-5736953848f1", 00:07:30.940 "assigned_rate_limits": { 00:07:30.940 "rw_ios_per_sec": 0, 00:07:30.940 "rw_mbytes_per_sec": 0, 00:07:30.940 "r_mbytes_per_sec": 0, 00:07:30.940 "w_mbytes_per_sec": 0 00:07:30.940 }, 00:07:30.940 "claimed": true, 00:07:30.940 "claim_type": "exclusive_write", 00:07:30.940 "zoned": false, 00:07:30.940 "supported_io_types": { 00:07:30.940 "read": true, 00:07:30.940 "write": true, 00:07:30.940 "unmap": true, 00:07:30.940 "flush": true, 00:07:30.940 "reset": true, 00:07:30.940 "nvme_admin": false, 00:07:30.940 "nvme_io": false, 00:07:30.940 "nvme_io_md": false, 00:07:30.940 "write_zeroes": true, 00:07:30.940 "zcopy": true, 00:07:30.940 "get_zone_info": false, 00:07:30.940 "zone_management": false, 00:07:30.940 "zone_append": false, 00:07:30.940 "compare": false, 00:07:30.940 "compare_and_write": false, 00:07:30.940 "abort": true, 00:07:30.940 "seek_hole": false, 00:07:30.940 "seek_data": false, 00:07:30.940 "copy": true, 00:07:30.940 "nvme_iov_md": false 00:07:30.940 }, 00:07:30.940 "memory_domains": [ 00:07:30.940 { 00:07:30.940 "dma_device_id": "system", 00:07:30.940 "dma_device_type": 1 00:07:30.940 }, 00:07:30.940 { 00:07:30.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.940 "dma_device_type": 2 00:07:30.940 } 00:07:30.940 ], 00:07:30.940 "driver_specific": {} 00:07:30.940 } 00:07:30.940 ]' 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:30.940 21:23:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:32.875 21:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:32.875 21:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:32.875 21:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:07:32.875 21:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:32.875 21:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:34.787 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:35.047 21:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.989 
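With Malloc1 exported through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, the initiator side connects with nvme-cli, waits for a namespace to show up under the SPDKISFASTANDAWESOME serial, checks that the block device matches the 512 MiB malloc bdev, and lays down a single GPT partition for the filesystem tests that follow. A condensed sketch of those steps, with the NQNs, address and mount point taken from this log (the harness's waitforserial loop is replaced by a plain sleep):

  # Sketch only: initiator connect + partitioning as traced above.
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
       -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  sleep 2                                                  # crude stand-in for waitforserial

  dev=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/ {print $1; exit}')  # nvme0n1 here
  size=$(( $(cat "/sys/block/$dev/size") * 512 ))          # 512 B sectors -> 536870912 bytes
  mkdir -p /mnt/device
  parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1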
************************************ 00:07:35.989 START TEST filesystem_ext4 00:07:35.989 ************************************ 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:35.989 21:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:35.989 mke2fs 1.46.5 (30-Dec-2021) 00:07:35.989 Discarding device blocks: 0/522240 done 00:07:35.989 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:35.989 Filesystem UUID: 554327ad-3d26-4776-afe6-5e54d6d9860c 00:07:35.989 Superblock backups stored on blocks: 00:07:35.989 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:35.989 00:07:35.989 Allocating group tables: 0/64 done 00:07:35.989 Writing inode tables: 0/64 done 00:07:36.250 Creating journal (8192 blocks): done 00:07:37.190 Writing superblocks and filesystem accounting information: 0/64 done 00:07:37.190 00:07:37.190 21:23:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:37.190 21:23:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:37.190 21:23:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:37.190 21:23:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:37.190 21:23:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:37.190 21:23:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:37.452 21:23:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1991974 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:37.452 00:07:37.452 real 0m1.328s 00:07:37.452 user 0m0.023s 00:07:37.452 sys 0m0.073s 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:37.452 ************************************ 00:07:37.452 END TEST filesystem_ext4 00:07:37.452 ************************************ 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.452 ************************************ 00:07:37.452 START TEST filesystem_btrfs 00:07:37.452 ************************************ 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:37.452 
21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:37.452 btrfs-progs v6.6.2 00:07:37.452 See https://btrfs.readthedocs.io for more information. 00:07:37.452 00:07:37.452 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:37.452 NOTE: several default settings have changed in version 5.15, please make sure 00:07:37.452 this does not affect your deployments: 00:07:37.452 - DUP for metadata (-m dup) 00:07:37.452 - enabled no-holes (-O no-holes) 00:07:37.452 - enabled free-space-tree (-R free-space-tree) 00:07:37.452 00:07:37.452 Label: (null) 00:07:37.452 UUID: 7af0f06b-b64e-46af-aff2-e6c4fedf6a17 00:07:37.452 Node size: 16384 00:07:37.452 Sector size: 4096 00:07:37.452 Filesystem size: 510.00MiB 00:07:37.452 Block group profiles: 00:07:37.452 Data: single 8.00MiB 00:07:37.452 Metadata: DUP 32.00MiB 00:07:37.452 System: DUP 8.00MiB 00:07:37.452 SSD detected: yes 00:07:37.452 Zoned device: no 00:07:37.452 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:37.452 Runtime features: free-space-tree 00:07:37.452 Checksum: crc32c 00:07:37.452 Number of devices: 1 00:07:37.452 Devices: 00:07:37.452 ID SIZE PATH 00:07:37.452 1 510.00MiB /dev/nvme0n1p1 00:07:37.452 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:37.452 21:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1991974 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:38.836 00:07:38.836 real 0m1.203s 00:07:38.836 user 0m0.040s 00:07:38.836 sys 0m0.118s 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 
00:07:38.836 ************************************ 00:07:38.836 END TEST filesystem_btrfs 00:07:38.836 ************************************ 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.836 ************************************ 00:07:38.836 START TEST filesystem_xfs 00:07:38.836 ************************************ 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:38.836 21:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:38.836 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:38.836 = sectsz=512 attr=2, projid32bit=1 00:07:38.836 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:38.836 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:38.836 data = bsize=4096 blocks=130560, imaxpct=25 00:07:38.836 = sunit=0 swidth=0 blks 00:07:38.836 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:38.836 log =internal log bsize=4096 blocks=16384, version=2 00:07:38.836 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:38.836 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:39.777 Discarding blocks...Done. 
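Each filesystem_* subtest in this block follows the same pattern, which the xfs subtest continues below: format the partition (mkfs.ext4 -F, mkfs.btrfs -f or mkfs.xfs -f, as in the tool output above), mount it, create and remove a small file with a sync on either side, unmount, and then assert with kill -0 that the nvmf_tgt process survived the I/O. Roughly, per the target/filesystem.sh steps visible in the trace (a sketch, not the script itself):

  # Sketch of the per-filesystem check cycle (pid 1991974 is this run's target).
  fs_check() {
      local fstype=$1 tgt_pid=$2 part=/dev/nvme0n1p1
      if [ "$fstype" = ext4 ]; then
          mkfs.ext4 -F "$part"
      else
          "mkfs.$fstype" -f "$part"        # btrfs and xfs take -f instead of -F
      fi
      mount "$part" /mnt/device
      touch /mnt/device/aaa && sync
      rm /mnt/device/aaa && sync
      umount /mnt/device
      kill -0 "$tgt_pid"                   # fails if the target died under the I/O
  }
  fs_check xfs 1991974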
00:07:39.777 21:23:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:39.777 21:23:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.699 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.699 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:41.699 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.699 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:41.699 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:41.699 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.699 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1991974 00:07:41.699 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.699 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.699 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.699 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.699 00:07:41.699 real 0m2.847s 00:07:41.699 user 0m0.021s 00:07:41.699 sys 0m0.081s 00:07:41.699 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.699 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:41.699 ************************************ 00:07:41.699 END TEST filesystem_xfs 00:07:41.699 ************************************ 00:07:41.699 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:41.699 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:41.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:41.960 21:23:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1991974 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1991974 ']' 00:07:41.960 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1991974 00:07:42.221 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:42.221 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:42.221 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1991974 00:07:42.221 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:42.221 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:42.221 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1991974' 00:07:42.221 killing process with pid 1991974 00:07:42.221 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1991974 00:07:42.221 21:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1991974 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:42.483 00:07:42.483 real 0m12.548s 00:07:42.483 user 0m49.439s 00:07:42.483 sys 0m1.174s 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.483 ************************************ 00:07:42.483 END TEST nvmf_filesystem_no_in_capsule 00:07:42.483 ************************************ 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.483 ************************************ 00:07:42.483 START TEST nvmf_filesystem_in_capsule 00:07:42.483 ************************************ 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1994726 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1994726 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1994726 ']' 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.483 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.483 [2024-07-15 21:23:32.196952] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:07:42.483 [2024-07-15 21:23:32.197002] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.483 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.483 [2024-07-15 21:23:32.265937] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.745 [2024-07-15 21:23:32.340098] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.745 [2024-07-15 21:23:32.340142] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
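The pass starting here, nvmf_filesystem_in_capsule, repeats the same filesystem tests against a fresh target; the only functional difference is that the TCP transport created a few lines further down allows 4096 bytes of in-capsule data (-c 4096 instead of -c 0), so small writes can ride inside the NVMe/TCP command capsule rather than in a separate data transfer. For reference, the RPC sequence both passes issue to export the test namespace boils down to the following sketch (rpc.py standing in for the harness's rpc_cmd wrapper; the other transport flags are carried over unchanged from the harness):

  # Sketch of the export sequence for the in_capsule pass.
  RPC="$SPDK_DIR/scripts/rpc.py"                            # SPDK_DIR as defined earlier
  "$RPC" nvmf_create_transport -t tcp -o -u 8192 -c 4096    # 4 KiB of in-capsule data
  "$RPC" bdev_malloc_create 512 512 -b Malloc1              # 512 MiB bdev, 512 B blocks
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420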
00:07:42.745 [2024-07-15 21:23:32.340149] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.745 [2024-07-15 21:23:32.340155] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.745 [2024-07-15 21:23:32.340161] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.745 [2024-07-15 21:23:32.340242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.745 [2024-07-15 21:23:32.340374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.745 [2024-07-15 21:23:32.340539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.745 [2024-07-15 21:23:32.340540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.322 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.322 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:43.322 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:43.322 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.322 21:23:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.322 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.322 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:43.322 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:43.322 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.322 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.322 [2024-07-15 21:23:33.021822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.322 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.322 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:43.322 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.322 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.322 Malloc1 00:07:43.322 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.322 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:43.322 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.322 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.582 21:23:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.582 [2024-07-15 21:23:33.148392] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:43.582 { 00:07:43.582 "name": "Malloc1", 00:07:43.582 "aliases": [ 00:07:43.582 "e9f7b5c5-e18a-401f-a2c5-bc1ae88b169f" 00:07:43.582 ], 00:07:43.582 "product_name": "Malloc disk", 00:07:43.582 "block_size": 512, 00:07:43.582 "num_blocks": 1048576, 00:07:43.582 "uuid": "e9f7b5c5-e18a-401f-a2c5-bc1ae88b169f", 00:07:43.582 "assigned_rate_limits": { 00:07:43.582 "rw_ios_per_sec": 0, 00:07:43.582 "rw_mbytes_per_sec": 0, 00:07:43.582 "r_mbytes_per_sec": 0, 00:07:43.582 "w_mbytes_per_sec": 0 00:07:43.582 }, 00:07:43.582 "claimed": true, 00:07:43.582 "claim_type": "exclusive_write", 00:07:43.582 "zoned": false, 00:07:43.582 "supported_io_types": { 00:07:43.582 "read": true, 00:07:43.582 "write": true, 00:07:43.582 "unmap": true, 00:07:43.582 "flush": true, 00:07:43.582 "reset": true, 00:07:43.582 "nvme_admin": false, 00:07:43.582 "nvme_io": false, 00:07:43.582 "nvme_io_md": false, 00:07:43.582 "write_zeroes": true, 00:07:43.582 "zcopy": true, 00:07:43.582 "get_zone_info": false, 00:07:43.582 "zone_management": false, 00:07:43.582 
"zone_append": false, 00:07:43.582 "compare": false, 00:07:43.582 "compare_and_write": false, 00:07:43.582 "abort": true, 00:07:43.582 "seek_hole": false, 00:07:43.582 "seek_data": false, 00:07:43.582 "copy": true, 00:07:43.582 "nvme_iov_md": false 00:07:43.582 }, 00:07:43.582 "memory_domains": [ 00:07:43.582 { 00:07:43.582 "dma_device_id": "system", 00:07:43.582 "dma_device_type": 1 00:07:43.582 }, 00:07:43.582 { 00:07:43.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.582 "dma_device_type": 2 00:07:43.582 } 00:07:43.582 ], 00:07:43.582 "driver_specific": {} 00:07:43.582 } 00:07:43.582 ]' 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:43.582 21:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:44.981 21:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:44.981 21:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:44.981 21:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:44.981 21:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:44.981 21:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:47.520 21:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:47.520 21:23:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:48.090 21:23:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:49.032 21:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:49.032 21:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:49.032 21:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:49.032 21:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.032 21:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.032 ************************************ 00:07:49.032 START TEST filesystem_in_capsule_ext4 00:07:49.032 ************************************ 00:07:49.032 21:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:49.032 21:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:49.032 21:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.032 21:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:49.033 21:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:49.033 21:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:49.033 21:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:49.033 21:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:49.033 21:23:38 
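The get_bdev_size / sec_size_to_bytes comparison just traced derives the expected device size from the RPC's JSON: block_size and num_blocks are pulled out of bdev_get_bdevs with jq, their product (512 * 1048576 = 536870912 bytes) is the malloc size, and that is compared against the sector count of the freshly connected namespace. A sketch of that arithmetic, under the assumption that exactly one bdev record is returned:

  # Sketch of the size check performed before partitioning.
  bdev_json=$("$SPDK_DIR/scripts/rpc.py" bdev_get_bdevs -b Malloc1)
  bs=$(jq '.[0].block_size' <<< "$bdev_json")       # 512
  nb=$(jq '.[0].num_blocks' <<< "$bdev_json")       # 1048576
  malloc_size=$(( bs * nb ))                        # 536870912 bytes (512 MiB)
  nvme_size=$(( $(cat /sys/block/nvme0n1/size) * 512 ))
  (( nvme_size == malloc_size )) || echo "size mismatch" >&2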
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:49.033 21:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:49.033 21:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:49.033 mke2fs 1.46.5 (30-Dec-2021) 00:07:49.293 Discarding device blocks: 0/522240 done 00:07:49.293 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:49.293 Filesystem UUID: 9bf2feb5-d6a5-42ea-97fd-38b23e6eca3a 00:07:49.293 Superblock backups stored on blocks: 00:07:49.293 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:49.293 00:07:49.293 Allocating group tables: 0/64 done 00:07:49.293 Writing inode tables: 0/64 done 00:07:51.205 Creating journal (8192 blocks): done 00:07:51.205 Writing superblocks and filesystem accounting information: 0/64 done 00:07:51.205 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1994726 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:51.205 00:07:51.205 real 0m2.027s 00:07:51.205 user 0m0.019s 00:07:51.205 sys 0m0.078s 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 ************************************ 00:07:51.205 END TEST filesystem_in_capsule_ext4 00:07:51.205 ************************************ 00:07:51.205 
21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.205 ************************************ 00:07:51.205 START TEST filesystem_in_capsule_btrfs 00:07:51.205 ************************************ 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:51.205 21:23:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:51.465 btrfs-progs v6.6.2 00:07:51.465 See https://btrfs.readthedocs.io for more information. 00:07:51.465 00:07:51.465 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:51.465 NOTE: several default settings have changed in version 5.15, please make sure 00:07:51.465 this does not affect your deployments: 00:07:51.465 - DUP for metadata (-m dup) 00:07:51.465 - enabled no-holes (-O no-holes) 00:07:51.465 - enabled free-space-tree (-R free-space-tree) 00:07:51.465 00:07:51.465 Label: (null) 00:07:51.465 UUID: f4403d54-52e0-4c45-8110-59d214b48afd 00:07:51.465 Node size: 16384 00:07:51.465 Sector size: 4096 00:07:51.465 Filesystem size: 510.00MiB 00:07:51.465 Block group profiles: 00:07:51.465 Data: single 8.00MiB 00:07:51.465 Metadata: DUP 32.00MiB 00:07:51.466 System: DUP 8.00MiB 00:07:51.466 SSD detected: yes 00:07:51.466 Zoned device: no 00:07:51.466 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:51.466 Runtime features: free-space-tree 00:07:51.466 Checksum: crc32c 00:07:51.466 Number of devices: 1 00:07:51.466 Devices: 00:07:51.466 ID SIZE PATH 00:07:51.466 1 510.00MiB /dev/nvme0n1p1 00:07:51.466 00:07:51.466 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:51.466 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:52.036 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:52.036 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:52.036 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:52.036 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:52.036 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:52.036 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:52.036 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1994726 00:07:52.036 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:52.036 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:52.036 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:52.036 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:52.036 00:07:52.036 real 0m0.721s 00:07:52.037 user 0m0.028s 00:07:52.037 sys 0m0.131s 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:52.037 ************************************ 00:07:52.037 END TEST filesystem_in_capsule_btrfs 00:07:52.037 ************************************ 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.037 ************************************ 00:07:52.037 START TEST filesystem_in_capsule_xfs 00:07:52.037 ************************************ 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:52.037 21:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:52.037 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:52.037 = sectsz=512 attr=2, projid32bit=1 00:07:52.037 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:52.037 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:52.037 data = bsize=4096 blocks=130560, imaxpct=25 00:07:52.037 = sunit=0 swidth=0 blks 00:07:52.037 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:52.037 log =internal log bsize=4096 blocks=16384, version=2 00:07:52.037 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:52.037 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:52.979 Discarding blocks...Done. 
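Each of the three filesystem subtests (ext4 and btrfs above, plus the xfs run whose mkfs output ends here) drives the same partition/format/verify cycle; only the mkfs force flag changes, as the @929/@930/@932 branches show. A condensed sketch of that cycle, with "$nvmfpid" standing in for the target PID (1994726 in this run):

parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe && sleep 1

make_filesystem() {                       # approximation of the helper traced above
  local fstype=$1 dev_name=$2 force
  if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi   # btrfs and xfs take -f
  mkfs."$fstype" "$force" "$dev_name"
}
make_filesystem xfs /dev/nvme0n1p1        # the ext4 and btrfs passes are identical in shape

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync             # prove the mounted namespace accepts writes
rm /mnt/device/aaa && sync
umount /mnt/device
kill -0 "$nvmfpid"                        # the nvmf target must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1p1   # and the partition must still be visible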
00:07:52.979 21:23:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:52.979 21:23:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:55.596 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:55.596 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:55.596 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:55.596 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:55.596 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:55.596 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:55.596 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1994726 00:07:55.596 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:55.596 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:55.596 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:55.596 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:55.596 00:07:55.596 real 0m3.484s 00:07:55.596 user 0m0.030s 00:07:55.596 sys 0m0.076s 00:07:55.596 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.596 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:55.596 ************************************ 00:07:55.596 END TEST filesystem_in_capsule_xfs 00:07:55.596 ************************************ 00:07:55.596 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:55.596 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:55.856 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:56.428 21:23:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:56.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:56.428 21:23:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1994726 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1994726 ']' 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1994726 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1994726 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1994726' 00:07:56.428 killing process with pid 1994726 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1994726 00:07:56.428 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1994726 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:56.690 00:07:56.690 real 0m14.242s 00:07:56.690 user 0m56.151s 00:07:56.690 sys 0m1.257s 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.690 ************************************ 00:07:56.690 END TEST nvmf_filesystem_in_capsule 00:07:56.690 ************************************ 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:56.690 rmmod nvme_tcp 00:07:56.690 rmmod nvme_fabrics 00:07:56.690 rmmod nvme_keyring 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.690 21:23:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.236 21:23:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:59.236 00:07:59.236 real 0m36.470s 00:07:59.236 user 1m47.779s 00:07:59.236 sys 0m7.846s 00:07:59.236 21:23:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.236 21:23:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.236 ************************************ 00:07:59.236 END TEST nvmf_filesystem 00:07:59.236 ************************************ 00:07:59.236 21:23:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:59.236 21:23:48 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:59.236 21:23:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:59.236 21:23:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.236 21:23:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:59.236 ************************************ 00:07:59.236 START TEST nvmf_target_discovery 00:07:59.236 ************************************ 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:59.236 * Looking for test storage... 
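Before the discovery suite gets under way, the nvmftestfini call that closed the filesystem suite unwound the TCP setup in the order visible just above. Roughly, and with the caveat that the killprocess and remove_spdk_ns helper bodies are only partially shown in this trace:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # drop the host-side controller
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
killprocess "$nvmfpid"                             # kill + wait on the nvmf_tgt pid (1994726 here)
modprobe -v -r nvme-tcp                            # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1                           # remove the test address from the initiator NIC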
00:07:59.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:59.236 21:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.823 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.824 21:23:55 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:05.824 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:05.824 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:05.824 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:05.824 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:05.824 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:06.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:08:06.085 00:08:06.085 --- 10.0.0.2 ping statistics --- 00:08:06.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.085 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.428 ms 00:08:06.085 00:08:06.085 --- 10.0.0.1 ping statistics --- 00:08:06.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.085 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2001792 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2001792 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2001792 ']' 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:06.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.085 21:23:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.085 [2024-07-15 21:23:55.869725] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:08:06.085 [2024-07-15 21:23:55.869788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.346 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.346 [2024-07-15 21:23:55.940818] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.346 [2024-07-15 21:23:56.016430] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.346 [2024-07-15 21:23:56.016463] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.346 [2024-07-15 21:23:56.016471] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.346 [2024-07-15 21:23:56.016477] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.346 [2024-07-15 21:23:56.016483] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.347 [2024-07-15 21:23:56.016650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.347 [2024-07-15 21:23:56.016663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.347 [2024-07-15 21:23:56.016799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.347 [2024-07-15 21:23:56.016801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.917 [2024-07-15 21:23:56.682759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
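The provisioning that discovery.sh performs at this point (a TCP transport, then four null-bdev subsystems via the seq 1 4 loop visible above, then a discovery listener and a port-4430 referral) collapses to roughly the following; rpc_cmd is the harness's RPC wrapper, and the loop is a reconstruction of the four unrolled passes that follow in the trace:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 4); do
  rpc_cmd bdev_null_create Null$i 102400 512        # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from above
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430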
00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:06.917 Null1 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.917 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 [2024-07-15 21:23:56.743084] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 Null2 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:07.179 21:23:56 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 Null3 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 Null4 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.179 21:23:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:08:07.441 00:08:07.441 Discovery Log Number of Records 6, Generation counter 6 00:08:07.441 =====Discovery Log Entry 0====== 00:08:07.441 trtype: tcp 00:08:07.441 adrfam: ipv4 00:08:07.441 subtype: current discovery subsystem 00:08:07.441 treq: not required 00:08:07.441 portid: 0 00:08:07.441 trsvcid: 4420 00:08:07.441 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:07.441 traddr: 10.0.0.2 00:08:07.441 eflags: explicit discovery connections, duplicate discovery information 00:08:07.441 sectype: none 00:08:07.441 =====Discovery Log Entry 1====== 00:08:07.441 trtype: tcp 00:08:07.441 adrfam: ipv4 00:08:07.441 subtype: nvme subsystem 00:08:07.441 treq: not required 00:08:07.441 portid: 0 00:08:07.441 trsvcid: 4420 00:08:07.441 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:07.441 traddr: 10.0.0.2 00:08:07.441 eflags: none 00:08:07.441 sectype: none 00:08:07.441 =====Discovery Log Entry 2====== 00:08:07.441 trtype: tcp 00:08:07.441 adrfam: ipv4 00:08:07.441 subtype: nvme subsystem 00:08:07.441 treq: not required 00:08:07.441 portid: 0 00:08:07.441 trsvcid: 4420 00:08:07.441 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:07.441 traddr: 10.0.0.2 00:08:07.441 eflags: none 00:08:07.441 sectype: none 00:08:07.441 =====Discovery Log Entry 3====== 00:08:07.441 trtype: tcp 00:08:07.441 adrfam: ipv4 00:08:07.441 subtype: nvme subsystem 00:08:07.441 treq: not required 00:08:07.441 portid: 0 00:08:07.441 trsvcid: 4420 00:08:07.441 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:07.441 traddr: 10.0.0.2 00:08:07.441 eflags: none 00:08:07.441 sectype: none 00:08:07.441 =====Discovery Log Entry 4====== 00:08:07.441 trtype: tcp 00:08:07.441 adrfam: ipv4 00:08:07.441 subtype: nvme subsystem 00:08:07.441 treq: not required 
00:08:07.441 portid: 0 00:08:07.441 trsvcid: 4420 00:08:07.441 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:07.441 traddr: 10.0.0.2 00:08:07.441 eflags: none 00:08:07.441 sectype: none 00:08:07.441 =====Discovery Log Entry 5====== 00:08:07.441 trtype: tcp 00:08:07.441 adrfam: ipv4 00:08:07.441 subtype: discovery subsystem referral 00:08:07.441 treq: not required 00:08:07.441 portid: 0 00:08:07.441 trsvcid: 4430 00:08:07.441 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:07.441 traddr: 10.0.0.2 00:08:07.441 eflags: none 00:08:07.441 sectype: none 00:08:07.441 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:07.441 Perform nvmf subsystem discovery via RPC 00:08:07.441 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:07.441 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.441 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.441 [ 00:08:07.441 { 00:08:07.441 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:07.441 "subtype": "Discovery", 00:08:07.441 "listen_addresses": [ 00:08:07.441 { 00:08:07.441 "trtype": "TCP", 00:08:07.441 "adrfam": "IPv4", 00:08:07.441 "traddr": "10.0.0.2", 00:08:07.441 "trsvcid": "4420" 00:08:07.441 } 00:08:07.441 ], 00:08:07.441 "allow_any_host": true, 00:08:07.441 "hosts": [] 00:08:07.441 }, 00:08:07.441 { 00:08:07.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.441 "subtype": "NVMe", 00:08:07.441 "listen_addresses": [ 00:08:07.441 { 00:08:07.441 "trtype": "TCP", 00:08:07.441 "adrfam": "IPv4", 00:08:07.441 "traddr": "10.0.0.2", 00:08:07.441 "trsvcid": "4420" 00:08:07.441 } 00:08:07.441 ], 00:08:07.441 "allow_any_host": true, 00:08:07.441 "hosts": [], 00:08:07.441 "serial_number": "SPDK00000000000001", 00:08:07.441 "model_number": "SPDK bdev Controller", 00:08:07.441 "max_namespaces": 32, 00:08:07.441 "min_cntlid": 1, 00:08:07.441 "max_cntlid": 65519, 00:08:07.441 "namespaces": [ 00:08:07.441 { 00:08:07.441 "nsid": 1, 00:08:07.441 "bdev_name": "Null1", 00:08:07.441 "name": "Null1", 00:08:07.441 "nguid": "FC638DDF248D47F49F4D2E3C7D015FB4", 00:08:07.441 "uuid": "fc638ddf-248d-47f4-9f4d-2e3c7d015fb4" 00:08:07.441 } 00:08:07.441 ] 00:08:07.441 }, 00:08:07.441 { 00:08:07.441 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:07.441 "subtype": "NVMe", 00:08:07.441 "listen_addresses": [ 00:08:07.441 { 00:08:07.441 "trtype": "TCP", 00:08:07.441 "adrfam": "IPv4", 00:08:07.441 "traddr": "10.0.0.2", 00:08:07.441 "trsvcid": "4420" 00:08:07.441 } 00:08:07.441 ], 00:08:07.441 "allow_any_host": true, 00:08:07.441 "hosts": [], 00:08:07.441 "serial_number": "SPDK00000000000002", 00:08:07.441 "model_number": "SPDK bdev Controller", 00:08:07.441 "max_namespaces": 32, 00:08:07.441 "min_cntlid": 1, 00:08:07.441 "max_cntlid": 65519, 00:08:07.441 "namespaces": [ 00:08:07.441 { 00:08:07.441 "nsid": 1, 00:08:07.441 "bdev_name": "Null2", 00:08:07.441 "name": "Null2", 00:08:07.441 "nguid": "2B1447A9CFD1435792866ABC5DCDD6D8", 00:08:07.441 "uuid": "2b1447a9-cfd1-4357-9286-6abc5dcdd6d8" 00:08:07.441 } 00:08:07.441 ] 00:08:07.441 }, 00:08:07.441 { 00:08:07.441 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:07.441 "subtype": "NVMe", 00:08:07.441 "listen_addresses": [ 00:08:07.441 { 00:08:07.441 "trtype": "TCP", 00:08:07.441 "adrfam": "IPv4", 00:08:07.441 "traddr": "10.0.0.2", 00:08:07.441 "trsvcid": "4420" 00:08:07.441 } 00:08:07.441 ], 00:08:07.441 "allow_any_host": true, 
00:08:07.441 "hosts": [], 00:08:07.441 "serial_number": "SPDK00000000000003", 00:08:07.441 "model_number": "SPDK bdev Controller", 00:08:07.441 "max_namespaces": 32, 00:08:07.441 "min_cntlid": 1, 00:08:07.441 "max_cntlid": 65519, 00:08:07.441 "namespaces": [ 00:08:07.441 { 00:08:07.441 "nsid": 1, 00:08:07.441 "bdev_name": "Null3", 00:08:07.441 "name": "Null3", 00:08:07.441 "nguid": "79D883B1664D422D9F27BE39F17CB48F", 00:08:07.441 "uuid": "79d883b1-664d-422d-9f27-be39f17cb48f" 00:08:07.441 } 00:08:07.441 ] 00:08:07.441 }, 00:08:07.441 { 00:08:07.441 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:07.441 "subtype": "NVMe", 00:08:07.441 "listen_addresses": [ 00:08:07.441 { 00:08:07.441 "trtype": "TCP", 00:08:07.441 "adrfam": "IPv4", 00:08:07.441 "traddr": "10.0.0.2", 00:08:07.441 "trsvcid": "4420" 00:08:07.441 } 00:08:07.441 ], 00:08:07.441 "allow_any_host": true, 00:08:07.441 "hosts": [], 00:08:07.441 "serial_number": "SPDK00000000000004", 00:08:07.441 "model_number": "SPDK bdev Controller", 00:08:07.441 "max_namespaces": 32, 00:08:07.441 "min_cntlid": 1, 00:08:07.441 "max_cntlid": 65519, 00:08:07.441 "namespaces": [ 00:08:07.441 { 00:08:07.441 "nsid": 1, 00:08:07.441 "bdev_name": "Null4", 00:08:07.441 "name": "Null4", 00:08:07.441 "nguid": "4805BFDFE0FF433892862F6C2AA226B0", 00:08:07.441 "uuid": "4805bfdf-e0ff-4338-9286-2f6c2aa226b0" 00:08:07.442 } 00:08:07.442 ] 00:08:07.442 } 00:08:07.442 ] 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.442 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.702 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:07.702 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:07.702 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:07.703 rmmod nvme_tcp 00:08:07.703 rmmod nvme_fabrics 00:08:07.703 rmmod nvme_keyring 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2001792 ']' 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2001792 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2001792 ']' 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2001792 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2001792 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2001792' 00:08:07.703 killing process with pid 2001792 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2001792 00:08:07.703 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2001792 00:08:07.963 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:07.963 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:07.963 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:07.963 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.963 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:07.963 21:23:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.963 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.963 21:23:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.875 21:23:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:09.875 00:08:09.875 real 0m10.962s 00:08:09.875 user 0m8.211s 00:08:09.875 sys 0m5.553s 00:08:09.875 21:23:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.875 21:23:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:09.875 ************************************ 00:08:09.875 END TEST nvmf_target_discovery 00:08:09.875 ************************************ 00:08:09.875 21:23:59 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:09.875 21:23:59 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:09.875 21:23:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:09.875 21:23:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.875 21:23:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:09.875 ************************************ 00:08:09.875 START TEST nvmf_referrals 00:08:09.875 ************************************ 00:08:09.875 21:23:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:10.136 * Looking for test storage... 00:08:10.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.136 21:23:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.136 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:10.136 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.136 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.136 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.136 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.136 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.136 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.136 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.136 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.136 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.136 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
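The referral checks that follow exercise the discovery-referral RPCs from both sides: the target's view via nvmf_discovery_get_referrals and the host's view via nvme discover against the discovery listener on port 8009. A condensed sketch of one round trip is given here; the rpc.py path is an assumption (the trace uses the rpc_cmd wrapper), while the RPC calls, nvme-cli invocation, and jq filters are the ones appearing in the trace below:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
$rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
$rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'    # target side: expect 127.0.0.2
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'    # host side: same address
$rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
# the test additionally passes --hostnqn/--hostid to nvme discover and repeats the add/remove for 127.0.0.3 and 127.0.0.4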
00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.137 21:23:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.281 21:24:06 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:18.281 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:18.281 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:18.281 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.282 21:24:06 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:18.282 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:18.282 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.282 21:24:06 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:18.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:08:18.282 00:08:18.282 --- 10.0.0.2 ping statistics --- 00:08:18.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.282 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:18.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.384 ms 00:08:18.282 00:08:18.282 --- 10.0.0.1 ping statistics --- 00:08:18.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.282 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2006552 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2006552 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2006552 ']' 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:18.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:18.282 21:24:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.282 [2024-07-15 21:24:07.036291] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:08:18.282 [2024-07-15 21:24:07.036358] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.282 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.282 [2024-07-15 21:24:07.110270] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.282 [2024-07-15 21:24:07.186872] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.282 [2024-07-15 21:24:07.186910] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.282 [2024-07-15 21:24:07.186918] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.282 [2024-07-15 21:24:07.186925] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.282 [2024-07-15 21:24:07.186930] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.282 [2024-07-15 21:24:07.187067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.282 [2024-07-15 21:24:07.187206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.282 [2024-07-15 21:24:07.187264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.282 [2024-07-15 21:24:07.187265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.282 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.283 [2024-07-15 21:24:07.868812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.283 [2024-07-15 21:24:07.885030] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.283 21:24:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.283 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:18.283 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:18.283 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:18.283 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:18.283 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:18.283 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.283 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:18.283 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:18.544 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.806 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:19.067 21:24:08 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:19.067 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:19.328 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:19.328 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:19.328 21:24:08 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:19.328 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:19.328 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:19.328 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.328 21:24:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:19.328 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:19.328 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:19.328 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:19.328 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:19.328 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.328 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:19.589 
21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.589 rmmod nvme_tcp 00:08:19.589 rmmod nvme_fabrics 00:08:19.589 rmmod nvme_keyring 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2006552 ']' 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2006552 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2006552 ']' 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2006552 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:19.589 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2006552 00:08:19.849 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:19.849 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:19.849 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2006552' 00:08:19.849 killing process with pid 2006552 00:08:19.849 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2006552 00:08:19.849 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2006552 00:08:19.849 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:19.849 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:19.849 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:19.849 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:19.849 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:19.849 21:24:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.849 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.849 21:24:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.388 21:24:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:22.388 00:08:22.388 real 0m11.964s 00:08:22.388 user 0m12.414s 00:08:22.388 sys 0m6.045s 00:08:22.388 21:24:11 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.388 21:24:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:22.388 ************************************ 00:08:22.388 END TEST nvmf_referrals 00:08:22.388 ************************************ 00:08:22.388 21:24:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:22.388 21:24:11 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:22.388 21:24:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:22.388 21:24:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.388 21:24:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:22.388 ************************************ 00:08:22.388 START TEST nvmf_connect_disconnect 00:08:22.388 ************************************ 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:22.388 * Looking for test storage... 00:08:22.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.388 21:24:11 
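Each test starts by sourcing test/nvmf/common.sh, which is where the port numbers and host identity used by the discover commands come from. Reconstructed from the variables echoed in this trace (the host-ID derivation is an assumption; only the resulting values appear in the log):

# Hedged sketch of the common.sh defaults visible in this run (not the full file).
NVMF_PORT=4420                       # data port later passed to nvmf_subsystem_add_listener
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)     # fresh uuid-based host NQN per run
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # same UUID reused as host ID (derivation assumed)
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn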
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:22.388 21:24:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:29.085 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:29.086 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:29.086 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:29.086 21:24:18 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:29.086 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:29.086 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- 
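The "Found ..." lines above come from gather_supported_nvmf_pci_devs walking sysfs to turn supported PCI functions into kernel interface names before picking a target and an initiator port. A stripped-down sketch of that mapping, using the two E810 (0x8086:0x159b) functions reported in this run:

# Hedged sketch -- resolve NVMe-oF-capable PCI functions to net device names via sysfs.
pci_devs=(0000:4b:00.0 0000:4b:00.1)
net_devs=()
for pci in "${pci_devs[@]}"; do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $path ]] || continue
        net_devs+=("${path##*/}")            # cvl_0_0 / cvl_0_1 under this lab's naming
    done
done
printf 'Found net device: %s\n' "${net_devs[@]}"
NVMF_TARGET_INTERFACE=${net_devs[0]}         # cvl_0_0
NVMF_INITIATOR_INTERFACE=${net_devs[1]}      # cvl_0_1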
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:29.086 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.347 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.347 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.347 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:29.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:08:29.347 00:08:29.347 --- 10.0.0.2 ping statistics --- 00:08:29.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.347 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:08:29.347 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:29.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:08:29.347 00:08:29.347 --- 10.0.0.1 ping statistics --- 00:08:29.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.347 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:08:29.347 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.347 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:29.348 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.348 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.348 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.348 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.348 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.348 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.348 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:29.348 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:29.348 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.348 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:29.348 21:24:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:29.348 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2011732 00:08:29.348 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2011732 00:08:29.348 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:29.348 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2011732 ']' 00:08:29.348 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.348 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.348 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.348 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.348 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:29.348 [2024-07-15 21:24:19.058637] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
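nvmf_tcp_init, traced above, builds a self-contained topology: the target-side port is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, TCP/4420 is opened in the firewall, and the two pings prove reachability before nvmf_tgt is launched inside that namespace. Condensed into a sketch (addresses, interface and namespace names taken from the log):

# Hedged sketch of the namespace-based test topology set up by nvmf_tcp_init.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

# The target itself then runs inside the namespace (nvmfappstart in the trace):
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &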
00:08:29.348 [2024-07-15 21:24:19.058707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.348 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.348 [2024-07-15 21:24:19.133336] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.609 [2024-07-15 21:24:19.209064] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.610 [2024-07-15 21:24:19.209104] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.610 [2024-07-15 21:24:19.209112] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.610 [2024-07-15 21:24:19.209118] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.610 [2024-07-15 21:24:19.209129] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.610 [2024-07-15 21:24:19.209197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.610 [2024-07-15 21:24:19.209319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.610 [2024-07-15 21:24:19.209478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.610 [2024-07-15 21:24:19.209479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.181 [2024-07-15 21:24:19.894846] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:30.181 21:24:19 
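Once waitforlisten sees the RPC socket, the connect/disconnect test builds its target configuration over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, and a subsystem exposing it. A minimal sketch using scripts/rpc.py with the arguments from the trace (rpc_cmd in the test is roughly this; the add_ns/add_listener calls appear in the lines just below):

# Hedged sketch of the target configuration issued via RPC in this test.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0        # '*** TCP Transport Init ***'
$RPC bdev_malloc_create 64 512                           # returns bdev name Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420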
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:30.181 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.182 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.182 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.182 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.182 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.182 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.182 [2024-07-15 21:24:19.954151] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.182 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.182 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:30.182 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:30.182 21:24:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:34.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:48.477 rmmod nvme_tcp 00:08:48.477 rmmod nvme_fabrics 00:08:48.477 rmmod nvme_keyring 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2011732 ']' 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2011732 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- 
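The five "disconnected 1 controller(s)" lines are the visible half of the test loop: num_iterations=5, and each pass connects the host to cnode1 over TCP and tears the association down again. The loop body itself is not expanded in this trace, so the following is an assumption about what connect_disconnect.sh does, built from standard nvme-cli commands and the addresses above:

# Hedged sketch of the connect/disconnect loop (loop body assumed, not shown in the log).
for i in $(seq 1 5); do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # ... wait for the namespace to show up, then drop the association again ...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints 'NQN:... disconnected 1 controller(s)'
done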
common/autotest_common.sh@948 -- # '[' -z 2011732 ']' 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2011732 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:48.477 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2011732 00:08:48.738 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:48.738 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:48.738 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2011732' 00:08:48.738 killing process with pid 2011732 00:08:48.738 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2011732 00:08:48.738 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2011732 00:08:48.738 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:48.738 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:48.738 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:48.738 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.738 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:48.738 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.738 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.738 21:24:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.284 21:24:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:51.284 00:08:51.284 real 0m28.827s 00:08:51.284 user 1m18.766s 00:08:51.284 sys 0m6.560s 00:08:51.284 21:24:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.284 21:24:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:51.284 ************************************ 00:08:51.284 END TEST nvmf_connect_disconnect 00:08:51.284 ************************************ 00:08:51.284 21:24:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:51.284 21:24:40 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:51.284 21:24:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:51.284 21:24:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.284 21:24:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:51.284 ************************************ 00:08:51.284 START TEST nvmf_multitarget 00:08:51.284 ************************************ 00:08:51.284 21:24:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:51.284 * Looking for test storage... 
00:08:51.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.284 21:24:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.284 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:51.284 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.284 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.284 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.284 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.284 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.284 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.284 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.284 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.284 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.284 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:51.285 21:24:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:57.899 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:57.899 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:57.899 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:57.899 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:57.899 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:58.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:58.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:08:58.180 00:08:58.180 --- 10.0.0.2 ping statistics --- 00:08:58.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.180 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:58.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:58.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:08:58.180 00:08:58.180 --- 10.0.0.1 ping statistics --- 00:08:58.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.180 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:58.180 21:24:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:58.181 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2019648 00:08:58.181 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2019648 00:08:58.181 21:24:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:58.181 21:24:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2019648 ']' 00:08:58.181 21:24:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.181 21:24:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.181 21:24:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.181 21:24:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.181 21:24:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:58.181 [2024-07-15 21:24:47.914367] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:08:58.181 [2024-07-15 21:24:47.914433] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.181 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.440 [2024-07-15 21:24:47.986113] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.440 [2024-07-15 21:24:48.061977] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.440 [2024-07-15 21:24:48.062017] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.440 [2024-07-15 21:24:48.062025] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.440 [2024-07-15 21:24:48.062031] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.440 [2024-07-15 21:24:48.062037] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.440 [2024-07-15 21:24:48.062184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.440 [2024-07-15 21:24:48.062389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.440 [2024-07-15 21:24:48.062390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.440 [2024-07-15 21:24:48.062237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.010 21:24:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:59.010 21:24:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:59.010 21:24:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:59.010 21:24:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:59.010 21:24:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:59.010 21:24:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.010 21:24:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:59.010 21:24:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:59.010 21:24:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:59.271 21:24:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:59.271 21:24:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:59.271 "nvmf_tgt_1" 00:08:59.271 21:24:48 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:59.271 "nvmf_tgt_2" 00:08:59.271 21:24:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:59.271 21:24:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:59.531 21:24:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:59.531 21:24:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:59.531 true 00:08:59.531 21:24:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:59.531 true 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:59.791 rmmod nvme_tcp 00:08:59.791 rmmod nvme_fabrics 00:08:59.791 rmmod nvme_keyring 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2019648 ']' 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2019648 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2019648 ']' 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2019648 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2019648 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2019648' 00:08:59.791 killing process with pid 2019648 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2019648 00:08:59.791 21:24:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2019648 00:09:00.052 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:00.052 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:00.052 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:00.052 21:24:49 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.052 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:00.052 21:24:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.052 21:24:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.052 21:24:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.963 21:24:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:02.224 00:09:02.224 real 0m11.151s 00:09:02.224 user 0m9.299s 00:09:02.224 sys 0m5.671s 00:09:02.224 21:24:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.224 21:24:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:02.224 ************************************ 00:09:02.224 END TEST nvmf_multitarget 00:09:02.224 ************************************ 00:09:02.224 21:24:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:02.224 21:24:51 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:02.224 21:24:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:02.224 21:24:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.224 21:24:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:02.224 ************************************ 00:09:02.224 START TEST nvmf_rpc 00:09:02.224 ************************************ 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:02.224 * Looking for test storage... 
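For reference, the nvmf_multitarget run that just ended exercised the target-management RPCs in a simple create/verify/delete cycle: two extra targets are added beside the default one, the count from nvmf_get_targets is checked with jq, and both are removed again. A condensed sketch of that flow under the same assumptions (workspace paths as above, default RPC socket):

RPC="$SPDK/test/nvmf/target/multitarget_rpc.py"     # $SPDK as in the earlier sketch
[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]    # only the default target at the start
$RPC nvmf_create_target -n nvmf_tgt_1 -s 32         # flags mirror the trace
$RPC nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]
$RPC nvmf_delete_target -n nvmf_tgt_1
$RPC nvmf_delete_target -n nvmf_tgt_2
[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default target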
00:09:02.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.224 21:24:51 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:02.225 21:24:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:02.225 21:24:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:02.225 21:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
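The common.sh sourcing traced above also fixes the initiator identity for the rest of this test: nvme gen-hostnqn produces the host NQN, the embedded UUID doubles as the host ID, and both are replayed on every later connect, including the access-control failures further down. A small sketch of that pattern, assuming nvme-cli is available; the UUID extraction shown here is illustrative, not a copy of common.sh:

NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # illustrative: reuse the UUID portion as the host ID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# Later connects pass the same identity, for example:
#   nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420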
00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:10.360 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:10.360 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:10.360 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:10.360 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:10.360 21:24:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.360 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.360 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.360 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:10.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:09:10.361 00:09:10.361 --- 10.0.0.2 ping statistics --- 00:09:10.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.361 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:10.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.402 ms 00:09:10.361 00:09:10.361 --- 10.0.0.1 ping statistics --- 00:09:10.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.361 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2024317 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2024317 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2024317 ']' 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.361 [2024-07-15 21:24:59.163049] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:09:10.361 [2024-07-15 21:24:59.163099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.361 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.361 [2024-07-15 21:24:59.229439] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.361 [2024-07-15 21:24:59.294708] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.361 [2024-07-15 21:24:59.294744] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:10.361 [2024-07-15 21:24:59.294752] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.361 [2024-07-15 21:24:59.294758] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.361 [2024-07-15 21:24:59.294767] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.361 [2024-07-15 21:24:59.294902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.361 [2024-07-15 21:24:59.295033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.361 [2024-07-15 21:24:59.295181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.361 [2024-07-15 21:24:59.295181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:10.361 "tick_rate": 2400000000, 00:09:10.361 "poll_groups": [ 00:09:10.361 { 00:09:10.361 "name": "nvmf_tgt_poll_group_000", 00:09:10.361 "admin_qpairs": 0, 00:09:10.361 "io_qpairs": 0, 00:09:10.361 "current_admin_qpairs": 0, 00:09:10.361 "current_io_qpairs": 0, 00:09:10.361 "pending_bdev_io": 0, 00:09:10.361 "completed_nvme_io": 0, 00:09:10.361 "transports": [] 00:09:10.361 }, 00:09:10.361 { 00:09:10.361 "name": "nvmf_tgt_poll_group_001", 00:09:10.361 "admin_qpairs": 0, 00:09:10.361 "io_qpairs": 0, 00:09:10.361 "current_admin_qpairs": 0, 00:09:10.361 "current_io_qpairs": 0, 00:09:10.361 "pending_bdev_io": 0, 00:09:10.361 "completed_nvme_io": 0, 00:09:10.361 "transports": [] 00:09:10.361 }, 00:09:10.361 { 00:09:10.361 "name": "nvmf_tgt_poll_group_002", 00:09:10.361 "admin_qpairs": 0, 00:09:10.361 "io_qpairs": 0, 00:09:10.361 "current_admin_qpairs": 0, 00:09:10.361 "current_io_qpairs": 0, 00:09:10.361 "pending_bdev_io": 0, 00:09:10.361 "completed_nvme_io": 0, 00:09:10.361 "transports": [] 00:09:10.361 }, 00:09:10.361 { 00:09:10.361 "name": "nvmf_tgt_poll_group_003", 00:09:10.361 "admin_qpairs": 0, 00:09:10.361 "io_qpairs": 0, 00:09:10.361 "current_admin_qpairs": 0, 00:09:10.361 "current_io_qpairs": 0, 00:09:10.361 "pending_bdev_io": 0, 00:09:10.361 "completed_nvme_io": 0, 00:09:10.361 "transports": [] 00:09:10.361 } 00:09:10.361 ] 00:09:10.361 }' 00:09:10.361 21:24:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:10.361 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:10.361 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:10.361 21:25:00 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:10.361 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:10.361 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:10.361 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:10.361 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.361 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.361 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.361 [2024-07-15 21:25:00.098623] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.361 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.361 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:10.361 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.361 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.361 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.361 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:10.361 "tick_rate": 2400000000, 00:09:10.361 "poll_groups": [ 00:09:10.361 { 00:09:10.361 "name": "nvmf_tgt_poll_group_000", 00:09:10.361 "admin_qpairs": 0, 00:09:10.361 "io_qpairs": 0, 00:09:10.361 "current_admin_qpairs": 0, 00:09:10.361 "current_io_qpairs": 0, 00:09:10.361 "pending_bdev_io": 0, 00:09:10.361 "completed_nvme_io": 0, 00:09:10.361 "transports": [ 00:09:10.361 { 00:09:10.361 "trtype": "TCP" 00:09:10.361 } 00:09:10.361 ] 00:09:10.361 }, 00:09:10.361 { 00:09:10.361 "name": "nvmf_tgt_poll_group_001", 00:09:10.361 "admin_qpairs": 0, 00:09:10.361 "io_qpairs": 0, 00:09:10.361 "current_admin_qpairs": 0, 00:09:10.361 "current_io_qpairs": 0, 00:09:10.361 "pending_bdev_io": 0, 00:09:10.361 "completed_nvme_io": 0, 00:09:10.361 "transports": [ 00:09:10.361 { 00:09:10.361 "trtype": "TCP" 00:09:10.361 } 00:09:10.361 ] 00:09:10.361 }, 00:09:10.361 { 00:09:10.361 "name": "nvmf_tgt_poll_group_002", 00:09:10.361 "admin_qpairs": 0, 00:09:10.361 "io_qpairs": 0, 00:09:10.361 "current_admin_qpairs": 0, 00:09:10.361 "current_io_qpairs": 0, 00:09:10.361 "pending_bdev_io": 0, 00:09:10.361 "completed_nvme_io": 0, 00:09:10.361 "transports": [ 00:09:10.361 { 00:09:10.361 "trtype": "TCP" 00:09:10.361 } 00:09:10.361 ] 00:09:10.361 }, 00:09:10.361 { 00:09:10.361 "name": "nvmf_tgt_poll_group_003", 00:09:10.361 "admin_qpairs": 0, 00:09:10.361 "io_qpairs": 0, 00:09:10.361 "current_admin_qpairs": 0, 00:09:10.361 "current_io_qpairs": 0, 00:09:10.361 "pending_bdev_io": 0, 00:09:10.361 "completed_nvme_io": 0, 00:09:10.361 "transports": [ 00:09:10.361 { 00:09:10.361 "trtype": "TCP" 00:09:10.361 } 00:09:10.361 ] 00:09:10.361 } 00:09:10.361 ] 00:09:10.362 }' 00:09:10.362 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:10.362 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:10.362 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:10.362 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
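The jcount/jsum helpers traced around this point are thin jq/awk wrappers over nvmf_get_stats, and the transport is created with the options accumulated in NVMF_TRANSPORT_OPTS. A condensed sketch of the checks rpc.sh performs here, using the stock rpc.py client in place of the rpc_cmd wrapper (transport flags copied from the trace):

RPC="$SPDK/scripts/rpc.py"                          # $SPDK as in the earlier sketch
stats=$($RPC nvmf_get_stats)
echo "$stats" | jq '.poll_groups[].name' | wc -l    # jcount: one poll group per core of -m 0xF, i.e. 4
echo "$stats" | jq '.poll_groups[0].transports[0]'  # null while no transport exists yet
$RPC nvmf_create_transport -t tcp -o -u 8192        # flags mirror NVMF_TRANSPORT_OPTS in the trace
$RPC nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # jsum: 0, nothing connected yet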
00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.622 Malloc1 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.622 [2024-07-15 21:25:00.278253] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:09:10.622 [2024-07-15 21:25:00.305053] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:10.622 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:10.622 could not add new controller: failed to write to nvme-fabrics device 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.622 21:25:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:12.532 21:25:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:12.532 21:25:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:12.532 21:25:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:12.532 21:25:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:12.532 21:25:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:14.444 21:25:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:14.444 21:25:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:14.444 21:25:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:14.444 21:25:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:14.444 21:25:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:14.444 21:25:03 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:14.444 21:25:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.444 21:25:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:14.444 21:25:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:14.444 21:25:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:14.444 21:25:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.444 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:14.444 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.444 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:14.444 21:25:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:14.444 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.444 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.444 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.444 21:25:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.444 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:14.444 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.445 [2024-07-15 21:25:04.060469] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:14.445 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:14.445 could not add new controller: failed to write to nvme-fabrics device 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.445 21:25:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:16.355 21:25:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:16.355 21:25:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:16.355 21:25:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.355 21:25:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:16.355 21:25:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:18.263 21:25:07 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.263 [2024-07-15 21:25:07.827047] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.263 21:25:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:19.644 21:25:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:19.644 21:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:19.644 21:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.644 21:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:19.644 21:25:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.193 [2024-07-15 21:25:11.563935] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.193 21:25:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:23.578 21:25:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:23.578 21:25:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:23.578 21:25:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.578 21:25:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:23.578 21:25:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.577 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.578 [2024-07-15 21:25:15.312965] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:25.578 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.578 21:25:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:25.578 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.578 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.578 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.578 21:25:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:25.578 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.578 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.578 21:25:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.578 21:25:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:27.488 21:25:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:27.489 21:25:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:27.489 21:25:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.489 21:25:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:27.489 21:25:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:29.401 21:25:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:29.401 21:25:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:29.401 21:25:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.401 21:25:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:29.401 21:25:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.401 21:25:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:29.401 21:25:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.401 21:25:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:29.401 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:29.401 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:29.401 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.401 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:29.401 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.401 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:29.401 21:25:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:29.401 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.401 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.401 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:29.401 21:25:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.401 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.402 [2024-07-15 21:25:19.080773] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.402 21:25:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:31.315 21:25:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.315 21:25:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:31.315 21:25:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.315 21:25:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:31.315 21:25:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.275 
21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.275 [2024-07-15 21:25:22.802759] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.275 21:25:22 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.275 21:25:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:34.657 21:25:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:34.657 21:25:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:34.657 21:25:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.657 21:25:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:34.657 21:25:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:37.200 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:37.200 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:37.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 [2024-07-15 21:25:26.552767] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 [2024-07-15 21:25:26.612898] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 [2024-07-15 21:25:26.677092] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
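The connect/verify/disconnect cycles earlier in this trace (target/rpc.sh@86-@91) rely on the waitforserial and waitforserial_disconnect helpers from autotest_common.sh, whose bodies are only partly visible in the xtrace. The bash sketch below reconstructs the polling pattern from the traced commands (lsblk -l -o NAME,SERIAL plus grep, a 15-iteration cap, a sleep between probes); the timeout and failure branches are assumptions and the real helpers may differ.

waitforserial() {
    # Poll until lsblk reports the expected number of block devices carrying
    # the given NVMe serial (SPDKISFASTANDAWESOME in this run).
    local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
    while (( i++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1   # assumed failure path; never exercised in this log
}

waitforserial_disconnect() {
    # Poll until no block device with the serial remains after nvme disconnect.
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ > 15 )) && return 1   # assumed timeout; never exercised here
        sleep 1
    done
    return 0
}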
00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 [2024-07-15 21:25:26.733257] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
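The iterations running through this stretch of the trace (target/rpc.sh@99-@107) repeat a create/teardown cycle without connecting a host: create the subsystem, add the TCP listener, attach the Malloc1 namespace, allow any host, then detach the namespace and delete the subsystem. rpc_cmd is a thin wrapper around scripts/rpc.py, so the cycle can be written out as plain RPC calls. The sketch below uses the commands and arguments visible in the trace; the wrapper's socket/namespace plumbing is omitted, and loops=5 is taken from the seq 1 5 at rpc.sh@99.

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
loops=5

for i in $(seq 1 "$loops"); do
    "$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1
    "$rpc" nvmf_subsystem_allow_any_host "$nqn"
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1
    "$rpc" nvmf_delete_subsystem "$nqn"
done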
00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.201 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.202 [2024-07-15 21:25:26.793459] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:37.202 "tick_rate": 2400000000, 00:09:37.202 "poll_groups": [ 00:09:37.202 { 00:09:37.202 "name": "nvmf_tgt_poll_group_000", 00:09:37.202 "admin_qpairs": 0, 00:09:37.202 "io_qpairs": 224, 00:09:37.202 "current_admin_qpairs": 0, 00:09:37.202 "current_io_qpairs": 0, 00:09:37.202 "pending_bdev_io": 0, 00:09:37.202 "completed_nvme_io": 489, 00:09:37.202 "transports": [ 00:09:37.202 { 00:09:37.202 "trtype": "TCP" 00:09:37.202 } 00:09:37.202 ] 00:09:37.202 }, 00:09:37.202 { 00:09:37.202 "name": "nvmf_tgt_poll_group_001", 00:09:37.202 "admin_qpairs": 1, 00:09:37.202 "io_qpairs": 223, 00:09:37.202 "current_admin_qpairs": 0, 00:09:37.202 "current_io_qpairs": 0, 00:09:37.202 "pending_bdev_io": 0, 00:09:37.202 "completed_nvme_io": 224, 00:09:37.202 "transports": [ 00:09:37.202 { 00:09:37.202 "trtype": "TCP" 00:09:37.202 } 00:09:37.202 ] 00:09:37.202 }, 00:09:37.202 { 
00:09:37.202 "name": "nvmf_tgt_poll_group_002", 00:09:37.202 "admin_qpairs": 6, 00:09:37.202 "io_qpairs": 218, 00:09:37.202 "current_admin_qpairs": 0, 00:09:37.202 "current_io_qpairs": 0, 00:09:37.202 "pending_bdev_io": 0, 00:09:37.202 "completed_nvme_io": 250, 00:09:37.202 "transports": [ 00:09:37.202 { 00:09:37.202 "trtype": "TCP" 00:09:37.202 } 00:09:37.202 ] 00:09:37.202 }, 00:09:37.202 { 00:09:37.202 "name": "nvmf_tgt_poll_group_003", 00:09:37.202 "admin_qpairs": 0, 00:09:37.202 "io_qpairs": 224, 00:09:37.202 "current_admin_qpairs": 0, 00:09:37.202 "current_io_qpairs": 0, 00:09:37.202 "pending_bdev_io": 0, 00:09:37.202 "completed_nvme_io": 276, 00:09:37.202 "transports": [ 00:09:37.202 { 00:09:37.202 "trtype": "TCP" 00:09:37.202 } 00:09:37.202 ] 00:09:37.202 } 00:09:37.202 ] 00:09:37.202 }' 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:37.202 21:25:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:37.202 rmmod nvme_tcp 00:09:37.202 rmmod nvme_fabrics 00:09:37.202 rmmod nvme_keyring 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2024317 ']' 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2024317 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2024317 ']' 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2024317 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2024317 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2024317' 00:09:37.463 killing process with pid 2024317 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2024317 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2024317 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.463 21:25:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.009 21:25:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:40.009 00:09:40.009 real 0m37.433s 00:09:40.009 user 1m53.528s 00:09:40.009 sys 0m7.160s 00:09:40.009 21:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.009 21:25:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.009 ************************************ 00:09:40.009 END TEST nvmf_rpc 00:09:40.009 ************************************ 00:09:40.009 21:25:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:40.009 21:25:29 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:40.009 21:25:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:40.009 21:25:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.009 21:25:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:40.009 ************************************ 00:09:40.009 START TEST nvmf_invalid 00:09:40.009 ************************************ 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:40.009 * Looking for test storage... 
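For reference, the qpair totals checked just before the nvmf_rpc test finished above (target/rpc.sh@112-@113) come from the jsum helper traced at rpc.sh@19-@20: it runs a jq filter over the nvmf_get_stats JSON and sums the resulting numbers with awk (224+223+218+224 = 889 I/O qpairs and 0+1+6+0 = 7 admin qpairs in this run). A minimal sketch, assuming $stats holds the JSON captured at rpc.sh@110:

jsum() {
    # Sum one numeric field across all poll groups of the nvmf_get_stats
    # output, e.g. jsum '.poll_groups[].io_qpairs'
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

# Usage as in the trace: fail the test if no I/O qpairs were ever created.
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))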
00:09:40.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:40.009 21:25:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:46.593 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:46.593 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.593 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:46.594 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:46.594 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.594 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:46.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:46.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:09:46.855 00:09:46.855 --- 10.0.0.2 ping statistics --- 00:09:46.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.855 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:46.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.382 ms 00:09:46.855 00:09:46.855 --- 10.0.0.1 ping statistics --- 00:09:46.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.855 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2034095 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2034095 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2034095 ']' 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:46.855 21:25:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:46.855 [2024-07-15 21:25:36.551039] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:09:46.855 [2024-07-15 21:25:36.551107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.855 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.855 [2024-07-15 21:25:36.625510] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.115 [2024-07-15 21:25:36.701271] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.115 [2024-07-15 21:25:36.701311] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.115 [2024-07-15 21:25:36.701319] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.115 [2024-07-15 21:25:36.701325] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.115 [2024-07-15 21:25:36.701331] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.115 [2024-07-15 21:25:36.701395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.115 [2024-07-15 21:25:36.701511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.115 [2024-07-15 21:25:36.701642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.115 [2024-07-15 21:25:36.701643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.692 21:25:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.692 21:25:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:47.692 21:25:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:47.692 21:25:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:47.692 21:25:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:47.692 21:25:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.692 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:47.692 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10045 00:09:47.960 [2024-07-15 21:25:37.513210] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:47.960 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:47.960 { 00:09:47.960 "nqn": "nqn.2016-06.io.spdk:cnode10045", 00:09:47.960 "tgt_name": "foobar", 00:09:47.960 "method": "nvmf_create_subsystem", 00:09:47.960 "req_id": 1 00:09:47.960 } 00:09:47.960 Got JSON-RPC error response 00:09:47.960 response: 00:09:47.960 { 00:09:47.960 "code": -32603, 00:09:47.960 "message": "Unable to find target foobar" 00:09:47.960 }' 00:09:47.960 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:47.960 { 00:09:47.960 "nqn": "nqn.2016-06.io.spdk:cnode10045", 00:09:47.960 "tgt_name": "foobar", 00:09:47.960 "method": "nvmf_create_subsystem", 00:09:47.960 "req_id": 1 00:09:47.960 } 00:09:47.960 Got JSON-RPC error response 00:09:47.960 response: 00:09:47.960 { 00:09:47.960 "code": -32603, 00:09:47.960 "message": "Unable to find target foobar" 
00:09:47.960 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:47.960 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:47.960 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24977 00:09:47.960 [2024-07-15 21:25:37.689771] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24977: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:47.960 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:47.960 { 00:09:47.960 "nqn": "nqn.2016-06.io.spdk:cnode24977", 00:09:47.960 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:47.960 "method": "nvmf_create_subsystem", 00:09:47.960 "req_id": 1 00:09:47.960 } 00:09:47.960 Got JSON-RPC error response 00:09:47.960 response: 00:09:47.960 { 00:09:47.960 "code": -32602, 00:09:47.960 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:47.960 }' 00:09:47.960 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:47.960 { 00:09:47.960 "nqn": "nqn.2016-06.io.spdk:cnode24977", 00:09:47.960 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:47.960 "method": "nvmf_create_subsystem", 00:09:47.960 "req_id": 1 00:09:47.960 } 00:09:47.960 Got JSON-RPC error response 00:09:47.960 response: 00:09:47.960 { 00:09:47.960 "code": -32602, 00:09:47.960 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:47.960 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:47.960 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:47.960 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1126 00:09:48.254 [2024-07-15 21:25:37.866376] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1126: invalid model number 'SPDK_Controller' 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:48.254 { 00:09:48.254 "nqn": "nqn.2016-06.io.spdk:cnode1126", 00:09:48.254 "model_number": "SPDK_Controller\u001f", 00:09:48.254 "method": "nvmf_create_subsystem", 00:09:48.254 "req_id": 1 00:09:48.254 } 00:09:48.254 Got JSON-RPC error response 00:09:48.254 response: 00:09:48.254 { 00:09:48.254 "code": -32602, 00:09:48.254 "message": "Invalid MN SPDK_Controller\u001f" 00:09:48.254 }' 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:48.254 { 00:09:48.254 "nqn": "nqn.2016-06.io.spdk:cnode1126", 00:09:48.254 "model_number": "SPDK_Controller\u001f", 00:09:48.254 "method": "nvmf_create_subsystem", 00:09:48.254 "req_id": 1 00:09:48.254 } 00:09:48.254 Got JSON-RPC error response 00:09:48.254 response: 00:09:48.254 { 00:09:48.254 "code": -32602, 00:09:48.254 "message": "Invalid MN SPDK_Controller\u001f" 00:09:48.254 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
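The invalid tgt_name, serial-number and model-number checks traced above all follow the same pattern: call nvmf_create_subsystem over JSON-RPC with a deliberately bad value, capture the error response, and assert that the message matches the expected text. A condensed sketch of that pattern; the rpc.py path and the two example calls are taken from the trace, while the check_rpc_error helper is only an illustration, not a function from invalid.sh:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

check_rpc_error() {                       # hypothetical helper, not part of invalid.sh
    local expected=$1; shift
    local out
    out=$("$rpc_py" "$@" 2>&1) || true    # the RPC call is expected to fail
    [[ $out == *"$expected"* ]] || { echo "unexpected error: $out" >&2; return 1; }
}

# Unknown target name -> "Unable to find target foobar"
check_rpc_error 'Unable to find target' \
    nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10045

# Serial number containing a control character (0x1f) -> "Invalid SN ..."
check_rpc_error 'Invalid SN' \
    nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24977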
00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.254 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.255 21:25:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.255 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ X == \- ]] 00:09:48.514 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'X6K-O!VuvxMz)Hy&{wFBr' 00:09:48.514 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'X6K-O!VuvxMz)Hy&{wFBr' nqn.2016-06.io.spdk:cnode8518 00:09:48.514 [2024-07-15 
21:25:38.203442] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8518: invalid serial number 'X6K-O!VuvxMz)Hy&{wFBr' 00:09:48.514 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:48.514 { 00:09:48.514 "nqn": "nqn.2016-06.io.spdk:cnode8518", 00:09:48.514 "serial_number": "X6K-O!VuvxMz)Hy&{wFBr", 00:09:48.514 "method": "nvmf_create_subsystem", 00:09:48.514 "req_id": 1 00:09:48.514 } 00:09:48.514 Got JSON-RPC error response 00:09:48.514 response: 00:09:48.514 { 00:09:48.514 "code": -32602, 00:09:48.514 "message": "Invalid SN X6K-O!VuvxMz)Hy&{wFBr" 00:09:48.514 }' 00:09:48.514 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:48.514 { 00:09:48.514 "nqn": "nqn.2016-06.io.spdk:cnode8518", 00:09:48.514 "serial_number": "X6K-O!VuvxMz)Hy&{wFBr", 00:09:48.514 "method": "nvmf_create_subsystem", 00:09:48.514 "req_id": 1 00:09:48.514 } 00:09:48.514 Got JSON-RPC error response 00:09:48.514 response: 00:09:48.514 { 00:09:48.514 "code": -32602, 00:09:48.514 "message": "Invalid SN X6K-O!VuvxMz)Hy&{wFBr" 00:09:48.514 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:48.514 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:48.514 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.515 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.775 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
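The long per-character trace surrounding this point is gen_random_s from invalid.sh building a random printable string one character at a time: printf %x picks a code point from the chars table, echo -e turns it into the character, and string+= appends it. A compressed restatement of what that loop does; this is a readability simplification, not the function's verbatim source (it draws from a trimmed character set rather than the script's full 32..127 table):

gen_random_s() {
    local length=$1 ll idx string=
    # printable characters to draw from (a trimmed version of the script's table)
    local chars='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!&{}()?@:;,._-'
    for (( ll = 0; ll < length; ll++ )); do
        idx=$(( RANDOM % ${#chars} ))
        string+=${chars:idx:1}            # append one randomly chosen character
    done
    # The real script also guards against a leading '-' before echoing the result.
    printf '%s\n' "$string"
}

serial=$(gen_random_s 21)   # e.g. the 21-character serial generated above
model=$(gen_random_s 41)    # e.g. the 41-character model number generated below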
00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 
00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
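A little further down the trace, the same "call it wrong, check the error" pattern is applied to the controller-ID range options: nvmf_create_subsystem is invoked with -i (min_cntlid) or -I (max_cntlid) values outside 1..65519, or with min greater than max, and each attempt has to come back with an "Invalid cntlid range [...]" error. A sketch of those five calls, with the subsystem NQNs and flags taken from the trace; the loop itself is only an illustration of the pattern:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Every call below is expected to fail, because cntlid must stay within 1..65519
# and min_cntlid must not exceed max_cntlid.
while read -r nqn args; do
    out=$("$rpc_py" nvmf_create_subsystem "$nqn" $args 2>&1) || true   # $args unquoted on purpose
    [[ $out == *'Invalid cntlid range'* ]] || echo "unexpected response: $out" >&2
done <<'EOF'
nqn.2016-06.io.spdk:cnode12042 -i 0
nqn.2016-06.io.spdk:cnode24717 -i 65520
nqn.2016-06.io.spdk:cnode14586 -I 0
nqn.2016-06.io.spdk:cnode2216 -I 65520
nqn.2016-06.io.spdk:cnode21443 -i 6 -I 5
EOF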
00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ _ == \- ]] 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '_oKq@r)s6W?&eC5$:\eU4M43v'\''p2b;x0`t;,8Ti.2' 00:09:48.776 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '_oKq@r)s6W?&eC5$:\eU4M43v'\''p2b;x0`t;,8Ti.2' nqn.2016-06.io.spdk:cnode15215 00:09:49.035 [2024-07-15 21:25:38.689027] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15215: invalid model number '_oKq@r)s6W?&eC5$:\eU4M43v'p2b;x0`t;,8Ti.2' 00:09:49.035 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:49.035 { 00:09:49.035 "nqn": "nqn.2016-06.io.spdk:cnode15215", 00:09:49.035 "model_number": "_oKq@r)s6W?&eC5$:\\eU4M43v'\''p2b;x0`t;,8Ti.2", 00:09:49.035 "method": "nvmf_create_subsystem", 00:09:49.035 "req_id": 1 00:09:49.035 } 00:09:49.035 Got JSON-RPC error response 00:09:49.035 response: 00:09:49.035 { 00:09:49.035 "code": -32602, 00:09:49.035 "message": "Invalid MN _oKq@r)s6W?&eC5$:\\eU4M43v'\''p2b;x0`t;,8Ti.2" 00:09:49.035 }' 00:09:49.035 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:49.035 { 00:09:49.035 "nqn": "nqn.2016-06.io.spdk:cnode15215", 00:09:49.035 "model_number": "_oKq@r)s6W?&eC5$:\\eU4M43v'p2b;x0`t;,8Ti.2", 00:09:49.035 "method": "nvmf_create_subsystem", 00:09:49.035 "req_id": 1 00:09:49.035 } 00:09:49.035 Got JSON-RPC error response 00:09:49.035 response: 00:09:49.035 { 00:09:49.035 "code": -32602, 00:09:49.035 "message": "Invalid MN _oKq@r)s6W?&eC5$:\\eU4M43v'p2b;x0`t;,8Ti.2" 00:09:49.035 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:49.035 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:49.295 [2024-07-15 21:25:38.861682] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.296 21:25:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:49.296 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:49.296 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:49.296 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:49.296 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:49.296 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:49.557 [2024-07-15 21:25:39.214783] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:49.557 21:25:39 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@69 -- # out='request: 00:09:49.557 { 00:09:49.557 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:49.557 "listen_address": { 00:09:49.557 "trtype": "tcp", 00:09:49.557 "traddr": "", 00:09:49.557 "trsvcid": "4421" 00:09:49.557 }, 00:09:49.557 "method": "nvmf_subsystem_remove_listener", 00:09:49.557 "req_id": 1 00:09:49.557 } 00:09:49.557 Got JSON-RPC error response 00:09:49.557 response: 00:09:49.557 { 00:09:49.557 "code": -32602, 00:09:49.557 "message": "Invalid parameters" 00:09:49.557 }' 00:09:49.557 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:49.557 { 00:09:49.557 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:49.557 "listen_address": { 00:09:49.557 "trtype": "tcp", 00:09:49.557 "traddr": "", 00:09:49.557 "trsvcid": "4421" 00:09:49.557 }, 00:09:49.557 "method": "nvmf_subsystem_remove_listener", 00:09:49.557 "req_id": 1 00:09:49.557 } 00:09:49.557 Got JSON-RPC error response 00:09:49.557 response: 00:09:49.557 { 00:09:49.557 "code": -32602, 00:09:49.557 "message": "Invalid parameters" 00:09:49.557 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:49.557 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12042 -i 0 00:09:49.849 [2024-07-15 21:25:39.379282] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12042: invalid cntlid range [0-65519] 00:09:49.849 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:49.849 { 00:09:49.849 "nqn": "nqn.2016-06.io.spdk:cnode12042", 00:09:49.849 "min_cntlid": 0, 00:09:49.849 "method": "nvmf_create_subsystem", 00:09:49.849 "req_id": 1 00:09:49.849 } 00:09:49.849 Got JSON-RPC error response 00:09:49.849 response: 00:09:49.849 { 00:09:49.849 "code": -32602, 00:09:49.849 "message": "Invalid cntlid range [0-65519]" 00:09:49.849 }' 00:09:49.849 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:49.849 { 00:09:49.849 "nqn": "nqn.2016-06.io.spdk:cnode12042", 00:09:49.849 "min_cntlid": 0, 00:09:49.849 "method": "nvmf_create_subsystem", 00:09:49.849 "req_id": 1 00:09:49.849 } 00:09:49.849 Got JSON-RPC error response 00:09:49.849 response: 00:09:49.849 { 00:09:49.849 "code": -32602, 00:09:49.849 "message": "Invalid cntlid range [0-65519]" 00:09:49.849 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:49.849 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24717 -i 65520 00:09:49.849 [2024-07-15 21:25:39.551840] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24717: invalid cntlid range [65520-65519] 00:09:49.849 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:49.849 { 00:09:49.849 "nqn": "nqn.2016-06.io.spdk:cnode24717", 00:09:49.849 "min_cntlid": 65520, 00:09:49.849 "method": "nvmf_create_subsystem", 00:09:49.849 "req_id": 1 00:09:49.849 } 00:09:49.849 Got JSON-RPC error response 00:09:49.849 response: 00:09:49.849 { 00:09:49.849 "code": -32602, 00:09:49.849 "message": "Invalid cntlid range [65520-65519]" 00:09:49.849 }' 00:09:49.849 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:49.849 { 00:09:49.849 "nqn": "nqn.2016-06.io.spdk:cnode24717", 00:09:49.849 "min_cntlid": 65520, 00:09:49.849 "method": "nvmf_create_subsystem", 00:09:49.849 "req_id": 1 00:09:49.849 } 
00:09:49.849 Got JSON-RPC error response 00:09:49.849 response: 00:09:49.849 { 00:09:49.849 "code": -32602, 00:09:49.849 "message": "Invalid cntlid range [65520-65519]" 00:09:49.849 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:49.849 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14586 -I 0 00:09:50.109 [2024-07-15 21:25:39.724474] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14586: invalid cntlid range [1-0] 00:09:50.109 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:50.109 { 00:09:50.109 "nqn": "nqn.2016-06.io.spdk:cnode14586", 00:09:50.109 "max_cntlid": 0, 00:09:50.109 "method": "nvmf_create_subsystem", 00:09:50.109 "req_id": 1 00:09:50.109 } 00:09:50.109 Got JSON-RPC error response 00:09:50.109 response: 00:09:50.109 { 00:09:50.109 "code": -32602, 00:09:50.109 "message": "Invalid cntlid range [1-0]" 00:09:50.109 }' 00:09:50.109 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:50.109 { 00:09:50.109 "nqn": "nqn.2016-06.io.spdk:cnode14586", 00:09:50.109 "max_cntlid": 0, 00:09:50.109 "method": "nvmf_create_subsystem", 00:09:50.109 "req_id": 1 00:09:50.109 } 00:09:50.109 Got JSON-RPC error response 00:09:50.109 response: 00:09:50.109 { 00:09:50.109 "code": -32602, 00:09:50.109 "message": "Invalid cntlid range [1-0]" 00:09:50.109 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:50.109 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2216 -I 65520 00:09:50.109 [2024-07-15 21:25:39.896974] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2216: invalid cntlid range [1-65520] 00:09:50.369 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:50.369 { 00:09:50.369 "nqn": "nqn.2016-06.io.spdk:cnode2216", 00:09:50.369 "max_cntlid": 65520, 00:09:50.369 "method": "nvmf_create_subsystem", 00:09:50.369 "req_id": 1 00:09:50.369 } 00:09:50.369 Got JSON-RPC error response 00:09:50.369 response: 00:09:50.369 { 00:09:50.369 "code": -32602, 00:09:50.369 "message": "Invalid cntlid range [1-65520]" 00:09:50.369 }' 00:09:50.369 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:50.369 { 00:09:50.369 "nqn": "nqn.2016-06.io.spdk:cnode2216", 00:09:50.369 "max_cntlid": 65520, 00:09:50.369 "method": "nvmf_create_subsystem", 00:09:50.369 "req_id": 1 00:09:50.369 } 00:09:50.369 Got JSON-RPC error response 00:09:50.369 response: 00:09:50.369 { 00:09:50.369 "code": -32602, 00:09:50.369 "message": "Invalid cntlid range [1-65520]" 00:09:50.369 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:50.369 21:25:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21443 -i 6 -I 5 00:09:50.369 [2024-07-15 21:25:40.073591] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21443: invalid cntlid range [6-5] 00:09:50.369 21:25:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:50.369 { 00:09:50.369 "nqn": "nqn.2016-06.io.spdk:cnode21443", 00:09:50.369 "min_cntlid": 6, 00:09:50.369 "max_cntlid": 5, 00:09:50.369 "method": "nvmf_create_subsystem", 00:09:50.369 "req_id": 1 00:09:50.369 } 00:09:50.369 Got 
JSON-RPC error response 00:09:50.369 response: 00:09:50.369 { 00:09:50.369 "code": -32602, 00:09:50.369 "message": "Invalid cntlid range [6-5]" 00:09:50.369 }' 00:09:50.369 21:25:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:50.369 { 00:09:50.369 "nqn": "nqn.2016-06.io.spdk:cnode21443", 00:09:50.369 "min_cntlid": 6, 00:09:50.369 "max_cntlid": 5, 00:09:50.369 "method": "nvmf_create_subsystem", 00:09:50.369 "req_id": 1 00:09:50.369 } 00:09:50.369 Got JSON-RPC error response 00:09:50.369 response: 00:09:50.369 { 00:09:50.369 "code": -32602, 00:09:50.369 "message": "Invalid cntlid range [6-5]" 00:09:50.369 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:50.369 21:25:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:50.629 { 00:09:50.629 "name": "foobar", 00:09:50.629 "method": "nvmf_delete_target", 00:09:50.629 "req_id": 1 00:09:50.629 } 00:09:50.629 Got JSON-RPC error response 00:09:50.629 response: 00:09:50.629 { 00:09:50.629 "code": -32602, 00:09:50.629 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:50.629 }' 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:50.629 { 00:09:50.629 "name": "foobar", 00:09:50.629 "method": "nvmf_delete_target", 00:09:50.629 "req_id": 1 00:09:50.629 } 00:09:50.629 Got JSON-RPC error response 00:09:50.629 response: 00:09:50.629 { 00:09:50.629 "code": -32602, 00:09:50.629 "message": "The specified target doesn't exist, cannot delete it." 00:09:50.629 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:50.629 rmmod nvme_tcp 00:09:50.629 rmmod nvme_fabrics 00:09:50.629 rmmod nvme_keyring 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2034095 ']' 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2034095 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2034095 ']' 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2034095 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers 
-o comm= 2034095 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2034095' 00:09:50.629 killing process with pid 2034095 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2034095 00:09:50.629 21:25:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2034095 00:09:50.890 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:50.890 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:50.890 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:50.890 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.890 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:50.890 21:25:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.890 21:25:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.890 21:25:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.799 21:25:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:52.799 00:09:52.799 real 0m13.172s 00:09:52.799 user 0m19.249s 00:09:52.799 sys 0m6.059s 00:09:52.799 21:25:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.799 21:25:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:52.799 ************************************ 00:09:52.799 END TEST nvmf_invalid 00:09:52.799 ************************************ 00:09:52.799 21:25:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:52.799 21:25:42 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:52.799 21:25:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:52.799 21:25:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.799 21:25:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:53.065 ************************************ 00:09:53.065 START TEST nvmf_abort 00:09:53.065 ************************************ 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:53.065 * Looking for test storage... 
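The nvmf_invalid teardown traced above (killprocess plus nvmftestfini) reduces to: stop the target process, unload the NVMe/TCP kernel modules, and tear the test namespace and addresses back down. A sketch of that sequence with the pid, namespace and interface names taken from the log; remove_spdk_ns itself runs with tracing disabled, so deleting the namespace is only an assumption about what it does:

nvmfpid=2034095                                   # pid reported by nvmfappstart above
if [[ -n $(ps --no-headers -o comm= "$nvmfpid" 2>/dev/null) ]]; then
    kill "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null || true           # wait only applies to children of this shell
fi
modprobe -v -r nvme-tcp || true                   # also pulls out nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics || true
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed equivalent of remove_spdk_ns
ip -4 addr flush cvl_0_1                          # drop the test address from the peer port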
00:09:53.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.065 21:25:42 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
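In the nvmf/common.sh prologue above, the host identity used for later `nvme connect` calls is produced by `nvme gen-hostnqn`, and the host ID is the UUID portion of that NQN. A sketch of that setup; the UUID-stripping expansion is one plausible way to derive NVME_HOSTID and is not claimed to be common.sh's exact code:

NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...282be
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # keep only the UUID part of the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# A later connect would then look roughly like:
#   nvme connect -t tcp -a <traddr> -s 4420 -n <subsystem nqn> "${NVME_HOST[@]}"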
00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:53.066 21:25:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:59.676 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.676 21:25:49 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:59.676 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:59.676 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:59.676 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:59.676 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:59.677 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:59.677 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:59.677 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:09:59.677 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.677 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.677 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:59.677 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.677 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.677 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:59.677 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.677 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.677 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:59.677 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:59.677 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.677 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:59.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:09:59.937 00:09:59.937 --- 10.0.0.2 ping statistics --- 00:09:59.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.937 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:09:59.937 00:09:59.937 --- 10.0.0.1 ping statistics --- 00:09:59.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.937 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:59.937 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.199 21:25:49 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:00.199 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.199 21:25:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:00.199 21:25:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.199 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2039041 00:10:00.199 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2039041 00:10:00.199 21:25:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:00.199 21:25:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2039041 ']' 00:10:00.199 21:25:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.199 21:25:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.199 21:25:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.199 21:25:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.199 21:25:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.199 [2024-07-15 21:25:49.823229] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:10:00.199 [2024-07-15 21:25:49.823282] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.199 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.199 [2024-07-15 21:25:49.907319] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.199 [2024-07-15 21:25:49.995453] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.199 [2024-07-15 21:25:49.995510] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:00.199 [2024-07-15 21:25:49.995518] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.199 [2024-07-15 21:25:49.995525] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.199 [2024-07-15 21:25:49.995531] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.199 [2024-07-15 21:25:49.995673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.199 [2024-07-15 21:25:49.995838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.199 [2024-07-15 21:25:49.995840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:01.141 [2024-07-15 21:25:50.632120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:01.141 Malloc0 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:01.141 Delay0 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.141 21:25:50 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:01.141 [2024-07-15 21:25:50.710465] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.141 21:25:50 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:01.141 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.141 [2024-07-15 21:25:50.830908] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:03.685 Initializing NVMe Controllers 00:10:03.685 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:03.685 controller IO queue size 128 less than required 00:10:03.685 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:03.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:03.686 Initialization complete. Launching workers. 
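The trace above is abort.sh building the target over rpc_cmd: a TCP transport, a 64 MB malloc bdev wrapped in a delay bdev, subsystem cnode0 with that namespace, and a listener on 10.0.0.2:4420, after which the abort example drives it at queue depth 128. A sketch of the same construction as plain rpc.py calls; paths are shortened to a relative spdk checkout, and the RPC socket is assumed to be the default /var/tmp/spdk.sock the target above was started with:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# 1-second run of the abort example against that listener, queue depth 128
./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The delay bdev's 1000000-microsecond latencies appear to be the point of the setup: with every I/O held for roughly a second, the queue stays full and the abort example always has outstanding commands to cancel, which is what the abort/success counters reported just below are tallying.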
00:10:03.686 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34343 00:10:03.686 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34404, failed to submit 62 00:10:03.686 success 34347, unsuccess 57, failed 0 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:03.686 rmmod nvme_tcp 00:10:03.686 rmmod nvme_fabrics 00:10:03.686 rmmod nvme_keyring 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2039041 ']' 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2039041 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2039041 ']' 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2039041 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:03.686 21:25:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2039041 00:10:03.686 21:25:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:03.686 21:25:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:03.686 21:25:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2039041' 00:10:03.686 killing process with pid 2039041 00:10:03.686 21:25:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2039041 00:10:03.686 21:25:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2039041 00:10:03.686 21:25:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:03.686 21:25:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:03.686 21:25:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:03.686 21:25:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:03.686 21:25:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:03.686 21:25:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.686 21:25:53 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:03.686 21:25:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.596 21:25:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:05.596 00:10:05.596 real 0m12.645s 00:10:05.596 user 0m13.062s 00:10:05.596 sys 0m6.195s 00:10:05.596 21:25:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.596 21:25:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:05.596 ************************************ 00:10:05.596 END TEST nvmf_abort 00:10:05.596 ************************************ 00:10:05.596 21:25:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:05.596 21:25:55 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:05.596 21:25:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:05.596 21:25:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.596 21:25:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:05.596 ************************************ 00:10:05.596 START TEST nvmf_ns_hotplug_stress 00:10:05.596 ************************************ 00:10:05.596 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:05.856 * Looking for test storage... 00:10:05.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.857 21:25:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:05.857 21:25:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:05.857 21:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:12.442 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:12.442 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.442 21:26:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:12.442 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:12.442 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.442 21:26:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.442 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.705 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.705 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.705 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:12.705 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.705 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.705 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.705 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:12.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:10:12.705 00:10:12.705 --- 10.0.0.2 ping statistics --- 00:10:12.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.705 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:10:12.705 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:10:12.705 00:10:12.705 --- 10.0.0.1 ping statistics --- 00:10:12.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.705 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:10:12.705 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.705 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:12.705 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:12.705 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2044013 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2044013 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2044013 ']' 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.706 21:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.967 [2024-07-15 21:26:02.557313] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
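The entry above is the second nvmf_tgt coming up, this time for the hotplug test: it is launched inside the cvl_0_0_ns_spdk namespace with core mask 0xE (three reactors, as the reactor start-up notices report) and tracepoint group mask 0xFFFF, and the script then waits for the RPC socket. A rough hand-run equivalent; the polling loop below is only a stand-in for the suite's waitforlisten helper, with rpc_get_methods used as a cheap liveness probe:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# wait until the target answers on its default RPC socket, /var/tmp/spdk.sock
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
    sleep 0.5
done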
00:10:12.967 [2024-07-15 21:26:02.557377] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.967 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.967 [2024-07-15 21:26:02.646815] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:12.967 [2024-07-15 21:26:02.739288] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.967 [2024-07-15 21:26:02.739343] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.967 [2024-07-15 21:26:02.739351] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.967 [2024-07-15 21:26:02.739358] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.967 [2024-07-15 21:26:02.739364] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.967 [2024-07-15 21:26:02.739528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.967 [2024-07-15 21:26:02.739674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.967 [2024-07-15 21:26:02.739675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.909 21:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.909 21:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:13.909 21:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.909 21:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.909 21:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.909 21:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.909 21:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:13.909 21:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:13.910 [2024-07-15 21:26:03.525556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.910 21:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:14.170 21:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.170 [2024-07-15 21:26:03.858990] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.170 21:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:14.431 21:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:14.431 Malloc0 00:10:14.431 21:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:14.692 Delay0 00:10:14.692 21:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.953 21:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:14.953 NULL1 00:10:14.953 21:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:15.213 21:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:15.213 21:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2044417 00:10:15.213 21:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:15.213 21:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.213 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.474 21:26:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.474 21:26:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:15.474 21:26:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:15.735 [2024-07-15 21:26:05.369781] bdev.c:5033:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:10:15.735 true 00:10:15.735 21:26:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:15.735 21:26:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.000 21:26:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.000 21:26:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:16.000 21:26:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:16.299 true 00:10:16.299 21:26:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:16.299 21:26:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.299 21:26:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.560 21:26:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:16.560 21:26:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:16.560 true 00:10:16.821 21:26:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:16.821 21:26:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.821 21:26:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.082 21:26:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:17.082 21:26:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:17.082 true 00:10:17.354 21:26:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:17.354 21:26:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.354 21:26:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.616 21:26:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:17.616 21:26:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:17.616 true 00:10:17.616 21:26:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:17.616 21:26:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.877 21:26:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.138 21:26:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:18.138 21:26:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:18.138 true 00:10:18.138 21:26:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:18.138 21:26:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.399 21:26:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
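The iterations running through this stretch of the trace are the core of ns_hotplug_stress.sh: spdk_nvme_perf (PERF_PID 2044417, started with -t 30) keeps reading from cnode1 while namespace 1 is removed, Delay0 is re-added, and the NULL1 bdev created earlier (bdev_null_create NULL1 1000 512) is grown by one block per pass via bdev_null_resize. A condensed sketch of that loop, assembled from the rpc.py calls in the trace; the real script does more bookkeeping, this is just the gist:

# 30-second random-read load against cnode1 over NVMe/TCP
./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
while kill -0 "$PERF_PID"; do                      # stop once the perf run exits
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    ./scripts/rpc.py bdev_null_resize NULL1 "$null_size"
done

Each pass therefore hits the initiator with a namespace hot-remove, a hot-add, and a capacity change while I/O is in flight, which is why the null_size counter in the trace ticks up by one on every iteration.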
00:10:18.659 21:26:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:18.659 21:26:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:18.659 true 00:10:18.659 21:26:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:18.659 21:26:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.920 21:26:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.920 21:26:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:18.920 21:26:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:19.180 true 00:10:19.180 21:26:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:19.180 21:26:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.440 21:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.440 21:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:19.440 21:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:19.702 true 00:10:19.702 21:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:19.702 21:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.962 21:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.962 21:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:19.962 21:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:20.223 true 00:10:20.224 21:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:20.224 21:26:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.498 21:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.499 21:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:20.499 21:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:20.769 true 00:10:20.769 21:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:20.769 21:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.029 21:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.029 21:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:21.029 21:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:21.290 true 00:10:21.290 21:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:21.290 21:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.290 21:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.551 21:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:21.551 21:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:21.812 true 00:10:21.812 21:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:21.813 21:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.813 21:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.074 21:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:22.074 21:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:22.336 true 00:10:22.336 21:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:22.336 21:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.336 21:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.605 21:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:22.605 21:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:22.605 true 00:10:22.867 
21:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:22.867 21:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.867 21:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.129 21:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:23.129 21:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:23.129 true 00:10:23.390 21:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:23.390 21:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.390 21:26:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.660 21:26:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:23.661 21:26:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:23.661 true 00:10:23.661 21:26:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:23.661 21:26:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.929 21:26:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.189 21:26:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:24.189 21:26:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:24.189 true 00:10:24.189 21:26:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:24.189 21:26:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.451 21:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.712 21:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:24.712 21:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:24.712 true 00:10:24.712 21:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:24.712 21:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.972 21:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.972 21:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:24.972 21:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:25.233 true 00:10:25.233 21:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:25.233 21:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.494 21:26:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.494 21:26:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:25.494 21:26:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:25.755 true 00:10:25.755 21:26:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:25.755 21:26:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.017 21:26:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.017 21:26:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:26.017 21:26:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:26.277 true 00:10:26.277 21:26:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:26.277 21:26:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.539 21:26:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.539 21:26:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:26.539 21:26:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:26.800 true 00:10:26.800 21:26:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:26.800 21:26:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:10:26.800 21:26:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.060 21:26:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:27.060 21:26:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:27.321 true 00:10:27.321 21:26:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:27.321 21:26:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.321 21:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.582 21:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:27.582 21:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:27.843 true 00:10:27.843 21:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:27.843 21:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.843 21:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.104 21:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:28.104 21:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:28.104 true 00:10:28.364 21:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:28.364 21:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.364 21:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.624 21:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:28.624 21:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:28.624 true 00:10:28.885 21:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:28.885 21:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.885 21:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.145 21:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:29.145 21:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:29.145 true 00:10:29.145 21:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:29.145 21:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.405 21:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.667 21:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:29.667 21:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:29.667 true 00:10:29.667 21:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:29.667 21:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.927 21:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.187 21:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:30.187 21:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:30.187 true 00:10:30.187 21:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:30.187 21:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.447 21:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.707 21:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:30.707 21:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:30.707 true 00:10:30.707 21:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:30.707 21:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.967 21:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.967 21:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 
00:10:30.967 21:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:31.227 true 00:10:31.227 21:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:31.227 21:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.488 21:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.488 21:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:31.488 21:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:31.748 true 00:10:31.748 21:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:31.748 21:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.009 21:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.009 21:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:32.009 21:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:32.312 true 00:10:32.313 21:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:32.313 21:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.313 21:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.576 21:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:32.576 21:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:32.839 true 00:10:32.839 21:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:32.839 21:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.839 21:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.100 21:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:33.100 21:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1036 00:10:33.360 true 00:10:33.360 21:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:33.360 21:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.360 21:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.624 21:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:33.624 21:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:33.886 true 00:10:33.886 21:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:33.886 21:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.886 21:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.147 21:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:34.147 21:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:34.147 true 00:10:34.147 21:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:34.147 21:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.408 21:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.669 21:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:34.669 21:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:34.669 true 00:10:34.669 21:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:34.669 21:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.929 21:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.215 21:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:35.215 21:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:35.215 true 00:10:35.215 21:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 
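Each resize above is acknowledged only by the RPC's "true" reply. If one wanted to confirm the new size out of band (this run does not), a hypothetical check could query the bdev; bdev_get_bdevs and its -b filter are standard SPDK RPCs, while the field arithmetic below is an assumption about the JSON layout. The rpc shorthand is the one from the sketch above.
"$rpc" bdev_get_bdevs -b NULL1 \
  | python3 -c 'import json,sys; b=json.load(sys.stdin)[0]; print(b["num_blocks"] * b["block_size"] // (1024 * 1024), "MiB")'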
00:10:35.215 21:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.475 21:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.475 21:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:35.475 21:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:35.736 true 00:10:35.736 21:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:35.736 21:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.997 21:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.997 21:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:35.997 21:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:36.258 true 00:10:36.258 21:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:36.258 21:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.519 21:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.519 21:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:36.519 21:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:36.780 true 00:10:36.780 21:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:36.780 21:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.040 21:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.040 21:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:37.040 21:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:37.300 true 00:10:37.300 21:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:37.300 21:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.560 21:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.560 21:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:37.560 21:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:37.820 true 00:10:37.820 21:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:37.820 21:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.080 21:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.080 21:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:38.080 21:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:38.340 true 00:10:38.340 21:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:38.340 21:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.340 21:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.601 21:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:38.601 21:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:38.861 true 00:10:38.861 21:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:38.861 21:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.861 21:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.122 21:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:39.122 21:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:39.383 true 00:10:39.383 21:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:39.383 21:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.383 21:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.644 21:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:39.644 21:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:39.905 true 00:10:39.905 21:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:39.905 21:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.905 21:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.165 21:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:40.165 21:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:40.426 true 00:10:40.426 21:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:40.426 21:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.426 21:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.688 21:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:40.688 21:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:40.688 true 00:10:40.948 21:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:40.948 21:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.948 21:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.209 21:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:41.209 21:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:41.209 true 00:10:41.469 21:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:41.469 21:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.469 21:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.729 21:26:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:10:41.729 21:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:41.729 true 00:10:41.729 21:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:41.729 21:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.990 21:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.250 21:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:10:42.250 21:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:10:42.250 true 00:10:42.250 21:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:42.250 21:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.511 21:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.772 21:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:10:42.772 21:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:10:42.772 true 00:10:42.772 21:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:42.772 21:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.033 21:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.293 21:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:10:43.293 21:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:10:43.293 true 00:10:43.293 21:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:43.293 21:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.568 21:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.831 21:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:10:43.831 21:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:10:43.831 true 00:10:43.831 21:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:43.831 21:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.092 21:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.353 21:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:10:44.353 21:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:10:44.353 true 00:10:44.353 21:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:44.353 21:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.614 21:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.614 21:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:10:44.614 21:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:10:44.875 true 00:10:44.875 21:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:44.876 21:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.137 21:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.137 21:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:10:45.137 21:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:10:45.397 true 00:10:45.397 21:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:45.397 21:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.397 Initializing NVMe Controllers 00:10:45.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:45.397 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:10:45.397 Controller IO queue size 128, less than required. 00:10:45.397 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:45.397 WARNING: Some requested NVMe devices were skipped 00:10:45.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:45.397 Initialization complete. Launching workers. 00:10:45.397 ======================================================== 00:10:45.397 Latency(us) 00:10:45.397 Device Information : IOPS MiB/s Average min max 00:10:45.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31081.40 15.18 4118.13 2241.40 10152.57 00:10:45.397 ======================================================== 00:10:45.397 Total : 31081.40 15.18 4118.13 2241.40 10152.57 00:10:45.397 00:10:45.658 21:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.658 21:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:10:45.658 21:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:10:45.919 true 00:10:45.919 21:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2044417 00:10:45.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2044417) - No such process 00:10:45.919 21:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2044417 00:10:45.919 21:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.180 21:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:46.180 21:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:46.180 21:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:46.180 21:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:46.180 21:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:46.180 21:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:46.441 null0 00:10:46.441 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:46.441 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:46.441 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:46.702 null1 00:10:46.702 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:46.702 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:46.702 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:46.702 null2 00:10:46.702 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:46.702 
21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:46.702 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:46.962 null3 00:10:46.962 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:46.962 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:46.962 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:46.962 null4 00:10:47.222 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:47.222 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:47.222 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:47.222 null5 00:10:47.222 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:47.222 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:47.222 21:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:47.481 null6 00:10:47.481 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:47.481 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:47.481 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:47.481 null7 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
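After the workload exits (the "No such process" from script line 44 above), the single-namespace phase is torn down and eight null bdevs are created for the multi-worker phase. A sketch of that setup, matching the bdev_null_create calls in the trace: the "100 4096" arguments are a 100 MiB bdev with a 4096-byte block size, the loop variables are assumptions, and rpc is the shorthand defined earlier.
nthreads=8                                         # @58
pids=()                                            # @58
for ((i = 0; i < nthreads; i++)); do               # @59-@60
	"$rpc" bdev_null_create "null$i" 100 4096      # 100 MiB null bdev, 4096-byte blocks
done
Null bdevs have no data backing (writes are discarded, reads complete immediately), so they stress the namespace add/remove metadata path rather than the data path.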
00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.741 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
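Each background worker runs the add_remove helper visible at script lines @14-@18: it pins one namespace ID to one null bdev and toggles that namespace ten times. A sketch of the helper as the trace shows it, continuing the rpc/nqn shorthands from the sketches above; the function body is reconstructed from the @14/@16/@17/@18 entries and the exact quoting is assumed.
add_remove() {                                               # @14-@18 in ns_hotplug_stress.sh
	local nsid=$1 bdev=$2                                    # @14
	for ((i = 0; i < 10; i++)); do                           # @16: ten add/remove rounds per worker
		"$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
		"$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
	done
}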
00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
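The workers are launched concurrently from the @62-@64 loop and reaped with a single wait (the eight collected PIDs appear at @66 further down). A sketch of that launcher, continuing the sketches above; the background "&" is assumed, since the trace only records the expanded commands and the pids+=($!) bookkeeping.
for ((i = 0; i < nthreads; i++)); do               # @62
	add_remove "$((i + 1))" "null$i" &             # @63: NSID i+1 backed by bdev null$i
	pids+=($!)                                     # @64: remember each worker's PID
done
wait "${pids[@]}"                                  # @66: block until all eight workers finish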
00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2050965 2050966 2050968 2050970 2050972 2050974 2050976 2050978 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:47.742 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.001 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:48.260 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.260 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:48.260 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:48.260 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.260 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.260 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:48.260 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:48.260 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:48.260 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.260 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.260 21:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:48.260 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.260 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.260 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:48.260 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.260 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.260 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:48.260 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.260 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.260 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.260 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.260 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.260 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:48.260 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.260 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.260 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:48.583 
21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.583 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.853 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.113 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:49.114 21:26:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.114 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.114 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:49.114 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:49.114 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.114 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:49.114 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.114 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.374 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:49.374 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:49.374 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.374 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.374 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:49.374 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.374 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.374 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:49.374 21:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:49.374 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.634 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.634 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.634 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.635 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:49.895 
21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.895 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:50.155 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:50.416 21:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.416 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.677 21:26:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:50.677 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:50.939 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:51.200 21:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:51.200 rmmod nvme_tcp 00:10:51.200 rmmod nvme_fabrics 00:10:51.462 rmmod nvme_keyring 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2044013 ']' 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2044013 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2044013 ']' 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2044013 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2044013 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2044013' 00:10:51.462 killing process with pid 2044013 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2044013 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2044013 00:10:51.462 21:26:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.462 21:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.010 21:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:54.010 00:10:54.010 real 0m47.953s 00:10:54.010 user 3m16.072s 00:10:54.010 sys 0m16.642s 00:10:54.010 21:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.010 21:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.010 ************************************ 00:10:54.010 END TEST nvmf_ns_hotplug_stress 00:10:54.010 ************************************ 00:10:54.010 21:26:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:54.010 21:26:43 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:54.010 21:26:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:54.010 21:26:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.010 21:26:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:54.010 ************************************ 00:10:54.010 START TEST nvmf_connect_stress 00:10:54.010 ************************************ 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:54.010 * Looking for test storage... 
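Right before the END TEST marker above, nvmftestfini tears the target environment back down. A minimal sketch of that cleanup, using only commands that actually appear in this log (pid 2044013 and interface cvl_0_1 are specific to this run), is:

    sync
    modprobe -v -r nvme-tcp      # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill 2044013                 # the nvmf_tgt reactor launched for this test (killprocess in the log)
    ip -4 addr flush cvl_0_1     # drop the test address from the e810 port

With that done, the harness reports the elapsed time for the suite and moves on to nvmf_connect_stress, whose setup output follows.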
00:10:54.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:54.010 21:26:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:00.604 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:00.604 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:00.604 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:00.604 21:26:50 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:00.604 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:00.604 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:00.605 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.605 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.605 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.605 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:00.605 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.605 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.605 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:00.605 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.605 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.605 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:00.605 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:00.605 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.605 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.605 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.605 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:00.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:00.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:11:00.866 00:11:00.866 --- 10.0.0.2 ping statistics --- 00:11:00.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.866 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:11:00.866 00:11:00.866 --- 10.0.0.1 ping statistics --- 00:11:00.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.866 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2056118 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2056118 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2056118 ']' 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.866 21:26:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.866 [2024-07-15 21:26:50.657369] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
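For reference while reading the trace, the nvmf_tcp_init sequence above reduces to the commands below. This is only a condensed recap of the xtrace output already shown (cvl_0_0 and cvl_0_1 are the net devices found under the two E810 ports, and the namespace name, addresses and port are the values echoed in the log):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator

After both pings succeed, nvmf_tcp_init prepends the "ip netns exec cvl_0_0_ns_spdk" prefix to NVMF_APP, which is why the target launch lines that follow run inside that namespace.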
00:11:00.866 [2024-07-15 21:26:50.657420] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.127 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.127 [2024-07-15 21:26:50.739597] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:01.127 [2024-07-15 21:26:50.819336] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.127 [2024-07-15 21:26:50.819387] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.127 [2024-07-15 21:26:50.819395] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.127 [2024-07-15 21:26:50.819402] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.127 [2024-07-15 21:26:50.819407] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.127 [2024-07-15 21:26:50.819533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.127 [2024-07-15 21:26:50.819707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.127 [2024-07-15 21:26:50.819708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.700 [2024-07-15 21:26:51.472023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.700 [2024-07-15 21:26:51.496383] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.700 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.961 NULL1 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2056300 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.962 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.223 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.223 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:02.223 21:26:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.223 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.223 21:26:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.484 21:26:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.484 21:26:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:02.484 21:26:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.484 21:26:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.484 21:26:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.055 21:26:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.055 21:26:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 2056300 00:11:03.055 21:26:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.055 21:26:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.055 21:26:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.315 21:26:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.315 21:26:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:03.315 21:26:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.315 21:26:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.315 21:26:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.578 21:26:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.578 21:26:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:03.578 21:26:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.578 21:26:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.578 21:26:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.842 21:26:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.842 21:26:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:03.842 21:26:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.842 21:26:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.842 21:26:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.102 21:26:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.102 21:26:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:04.102 21:26:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.102 21:26:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.102 21:26:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.672 21:26:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.672 21:26:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:04.672 21:26:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.672 21:26:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.672 21:26:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.932 21:26:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.932 21:26:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:04.933 21:26:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.933 21:26:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.933 21:26:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.193 21:26:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.193 21:26:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:05.193 21:26:54 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.193 21:26:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.193 21:26:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.454 21:26:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.454 21:26:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:05.454 21:26:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.454 21:26:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.454 21:26:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.714 21:26:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.714 21:26:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:05.714 21:26:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.714 21:26:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.714 21:26:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.287 21:26:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.287 21:26:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:06.287 21:26:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.287 21:26:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.287 21:26:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.548 21:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.548 21:26:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:06.548 21:26:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.548 21:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.548 21:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.809 21:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.809 21:26:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:06.809 21:26:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.809 21:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.809 21:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.070 21:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.070 21:26:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:07.070 21:26:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.070 21:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.070 21:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.642 21:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.642 21:26:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:07.642 21:26:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:07.642 21:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.642 21:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.903 21:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.903 21:26:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:07.903 21:26:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.903 21:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.903 21:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.163 21:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.163 21:26:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:08.163 21:26:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.163 21:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.163 21:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.424 21:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.424 21:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:08.424 21:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.424 21:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.424 21:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.697 21:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.697 21:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:08.697 21:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.697 21:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.697 21:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.267 21:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.267 21:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:09.267 21:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.267 21:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.267 21:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.528 21:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.528 21:26:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:09.528 21:26:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.528 21:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.528 21:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.787 21:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.787 21:26:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:09.787 21:26:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.787 21:26:59 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.787 21:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.051 21:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.051 21:26:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:10.051 21:26:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.051 21:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.051 21:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.311 21:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.311 21:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:10.311 21:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.311 21:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.311 21:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.605 21:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.605 21:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:10.605 21:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.605 21:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.605 21:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.177 21:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.177 21:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:11.177 21:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.177 21:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.177 21:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.436 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.436 21:27:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:11.436 21:27:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.436 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.436 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.697 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.697 21:27:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:11.697 21:27:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.697 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.697 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.958 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:11.958 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.958 21:27:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2056300 00:11:11.958 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2056300) - No such process 00:11:11.958 21:27:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2056300 00:11:11.958 21:27:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:11.958 21:27:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:11.958 21:27:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:11.958 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:11.958 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:11.958 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:11.958 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:11.958 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:11.958 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:11.958 rmmod nvme_tcp 00:11:11.958 rmmod nvme_fabrics 00:11:12.219 rmmod nvme_keyring 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2056118 ']' 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2056118 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2056118 ']' 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2056118 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2056118 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2056118' 00:11:12.219 killing process with pid 2056118 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2056118 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2056118 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.219 21:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.764 21:27:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:14.764 00:11:14.764 real 0m20.663s 00:11:14.764 user 0m42.167s 00:11:14.764 sys 0m8.370s 00:11:14.764 21:27:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:14.764 21:27:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.764 ************************************ 00:11:14.764 END TEST nvmf_connect_stress 00:11:14.764 ************************************ 00:11:14.764 21:27:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:14.764 21:27:04 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:14.764 21:27:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:14.764 21:27:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.764 21:27:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:14.764 ************************************ 00:11:14.764 START TEST nvmf_fused_ordering 00:11:14.764 ************************************ 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:14.764 * Looking for test storage... 00:11:14.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.764 21:27:04 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:14.764 21:27:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:21.357 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:21.357 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:21.357 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.357 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:21.358 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.358 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.619 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.619 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.619 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:21.619 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.619 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.619 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.619 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:21.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:11:21.619 00:11:21.619 --- 10.0.0.2 ping statistics --- 00:11:21.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.619 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:11:21.619 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:11:21.619 00:11:21.619 --- 10.0.0.1 ping statistics --- 00:11:21.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.620 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:11:21.620 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.620 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:21.620 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:21.620 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.620 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:21.620 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:21.620 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.620 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:21.620 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:21.881 21:27:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:21.881 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:21.881 21:27:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:21.881 21:27:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.881 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2063048 00:11:21.881 21:27:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2063048 00:11:21.881 21:27:11 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:21.881 21:27:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2063048 ']' 00:11:21.881 21:27:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.881 21:27:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:21.881 21:27:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.881 21:27:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:21.881 21:27:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:21.881 [2024-07-15 21:27:11.491221] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:11:21.881 [2024-07-15 21:27:11.491287] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.881 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.881 [2024-07-15 21:27:11.555329] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.881 [2024-07-15 21:27:11.620687] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.881 [2024-07-15 21:27:11.620722] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.881 [2024-07-15 21:27:11.620728] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.881 [2024-07-15 21:27:11.620733] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.881 [2024-07-15 21:27:11.620737] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
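[Editor's note] The nvmf_tcp_init steps traced above (nvmf/common.sh@229 through @268, plus the nvmf_tgt launch at @480) reduce to a short manual sequence. Below is a minimal sketch, assuming the same cvl_0_0/cvl_0_1 port names reported for this E810 card and the default /var/tmp/spdk.sock RPC socket; the socket poll at the end is a simplified stand-in for the harness's waitforlisten helper.

```bash
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init sequence traced above.
# Assumptions: E810 ports are named cvl_0_0/cvl_0_1; nvmf_tgt uses /var/tmp/spdk.sock.
set -e

TARGET_NS=cvl_0_0_ns_spdk                       # namespace that owns the target-side port

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"          # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) in on the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1

# Start nvmf_tgt inside the namespace on core 1 (-m 0x2), then wait for its RPC socket
# (a simplified stand-in for the harness's waitforlisten helper).
ip netns exec "$TARGET_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
```

Isolating one port in its own namespace is what lets a single dual-port NIC act as both initiator and target on one host: traffic leaves one physical port and re-enters the other instead of short-circuiting through the kernel loopback.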
00:11:21.881 [2024-07-15 21:27:11.620751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.839 [2024-07-15 21:27:12.350785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.839 [2024-07-15 21:27:12.366993] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.839 NULL1 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.839 21:27:12 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.839 21:27:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:22.839 [2024-07-15 21:27:12.423972] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:11:22.839 [2024-07-15 21:27:12.424017] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2063291 ] 00:11:22.840 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.411 Attached to nqn.2016-06.io.spdk:cnode1 00:11:23.411 Namespace ID: 1 size: 1GB 00:11:23.411 fused_ordering(0) 00:11:23.411 fused_ordering(1) 00:11:23.411 fused_ordering(2) 00:11:23.411 fused_ordering(3) 00:11:23.411 fused_ordering(4) 00:11:23.411 fused_ordering(5) 00:11:23.411 fused_ordering(6) 00:11:23.411 fused_ordering(7) 00:11:23.411 fused_ordering(8) 00:11:23.411 fused_ordering(9) 00:11:23.411 fused_ordering(10) 00:11:23.411 fused_ordering(11) 00:11:23.411 fused_ordering(12) 00:11:23.411 fused_ordering(13) 00:11:23.411 fused_ordering(14) 00:11:23.411 fused_ordering(15) 00:11:23.411 fused_ordering(16) 00:11:23.411 fused_ordering(17) 00:11:23.411 fused_ordering(18) 00:11:23.411 fused_ordering(19) 00:11:23.411 fused_ordering(20) 00:11:23.411 fused_ordering(21) 00:11:23.411 fused_ordering(22) 00:11:23.411 fused_ordering(23) 00:11:23.411 fused_ordering(24) 00:11:23.411 fused_ordering(25) 00:11:23.411 fused_ordering(26) 00:11:23.411 fused_ordering(27) 00:11:23.411 fused_ordering(28) 00:11:23.411 fused_ordering(29) 00:11:23.411 fused_ordering(30) 00:11:23.411 fused_ordering(31) 00:11:23.411 fused_ordering(32) 00:11:23.412 fused_ordering(33) 00:11:23.412 fused_ordering(34) 00:11:23.412 fused_ordering(35) 00:11:23.412 fused_ordering(36) 00:11:23.412 fused_ordering(37) 00:11:23.412 fused_ordering(38) 00:11:23.412 fused_ordering(39) 00:11:23.412 fused_ordering(40) 00:11:23.412 fused_ordering(41) 00:11:23.412 fused_ordering(42) 00:11:23.412 fused_ordering(43) 00:11:23.412 fused_ordering(44) 00:11:23.412 fused_ordering(45) 00:11:23.412 fused_ordering(46) 00:11:23.412 fused_ordering(47) 00:11:23.412 fused_ordering(48) 00:11:23.412 fused_ordering(49) 00:11:23.412 fused_ordering(50) 00:11:23.412 fused_ordering(51) 00:11:23.412 fused_ordering(52) 00:11:23.412 fused_ordering(53) 00:11:23.412 fused_ordering(54) 00:11:23.412 fused_ordering(55) 00:11:23.412 fused_ordering(56) 00:11:23.412 fused_ordering(57) 00:11:23.412 fused_ordering(58) 00:11:23.412 fused_ordering(59) 00:11:23.412 fused_ordering(60) 00:11:23.412 fused_ordering(61) 00:11:23.412 fused_ordering(62) 00:11:23.412 fused_ordering(63) 00:11:23.412 fused_ordering(64) 00:11:23.412 fused_ordering(65) 00:11:23.412 fused_ordering(66) 00:11:23.412 fused_ordering(67) 00:11:23.412 fused_ordering(68) 00:11:23.412 fused_ordering(69) 00:11:23.412 fused_ordering(70) 00:11:23.412 fused_ordering(71) 00:11:23.412 fused_ordering(72) 00:11:23.412 fused_ordering(73) 00:11:23.412 fused_ordering(74) 00:11:23.412 fused_ordering(75) 00:11:23.412 fused_ordering(76) 00:11:23.412 fused_ordering(77) 00:11:23.412 fused_ordering(78) 00:11:23.412 
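[Editor's note] For reference, the target configuration issued by fused_ordering.sh@15 through @20 and the initiator command at @22 (all traced above) come down to the commands below. rpc_cmd is the harness wrapper and is assumed here to forward to SPDK's RPC client on /var/tmp/spdk.sock.

```bash
# Target-side configuration from the fused_ordering.sh trace above.
# rpc_cmd is the test-harness wrapper (assumed to forward to scripts/rpc.py -s /var/tmp/spdk.sock).
rpc_cmd nvmf_create_transport -t tcp -o -u 8192                   # TCP transport with the harness's options
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                            # allow any host, set serial, max 10 namespaces
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                # listen on the namespaced target IP
rpc_cmd bdev_null_create NULL1 1000 512                           # 1000 MiB null bdev, 512-byte blocks (the 1GB namespace above)
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # expose NULL1 as namespace ID 1

# Initiator side: connect over TCP and run the fused-command ordering exercise.
./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```

The fused_ordering(0) through fused_ordering(1023) lines interleaved above and below are this tool's own output for the run.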
fused_ordering(79) 00:11:23.412 fused_ordering(80) 00:11:23.412 fused_ordering(81) 00:11:23.412 fused_ordering(82) 00:11:23.412 fused_ordering(83) 00:11:23.412 fused_ordering(84) 00:11:23.412 fused_ordering(85) 00:11:23.412 fused_ordering(86) 00:11:23.412 fused_ordering(87) 00:11:23.412 fused_ordering(88) 00:11:23.412 fused_ordering(89) 00:11:23.412 fused_ordering(90) 00:11:23.412 fused_ordering(91) 00:11:23.412 fused_ordering(92) 00:11:23.412 fused_ordering(93) 00:11:23.412 fused_ordering(94) 00:11:23.412 fused_ordering(95) 00:11:23.412 fused_ordering(96) 00:11:23.412 fused_ordering(97) 00:11:23.412 fused_ordering(98) 00:11:23.412 fused_ordering(99) 00:11:23.412 fused_ordering(100) 00:11:23.412 fused_ordering(101) 00:11:23.412 fused_ordering(102) 00:11:23.412 fused_ordering(103) 00:11:23.412 fused_ordering(104) 00:11:23.412 fused_ordering(105) 00:11:23.412 fused_ordering(106) 00:11:23.412 fused_ordering(107) 00:11:23.412 fused_ordering(108) 00:11:23.412 fused_ordering(109) 00:11:23.412 fused_ordering(110) 00:11:23.412 fused_ordering(111) 00:11:23.412 fused_ordering(112) 00:11:23.412 fused_ordering(113) 00:11:23.412 fused_ordering(114) 00:11:23.412 fused_ordering(115) 00:11:23.412 fused_ordering(116) 00:11:23.412 fused_ordering(117) 00:11:23.412 fused_ordering(118) 00:11:23.412 fused_ordering(119) 00:11:23.412 fused_ordering(120) 00:11:23.412 fused_ordering(121) 00:11:23.412 fused_ordering(122) 00:11:23.412 fused_ordering(123) 00:11:23.412 fused_ordering(124) 00:11:23.412 fused_ordering(125) 00:11:23.412 fused_ordering(126) 00:11:23.412 fused_ordering(127) 00:11:23.412 fused_ordering(128) 00:11:23.412 fused_ordering(129) 00:11:23.412 fused_ordering(130) 00:11:23.412 fused_ordering(131) 00:11:23.412 fused_ordering(132) 00:11:23.412 fused_ordering(133) 00:11:23.412 fused_ordering(134) 00:11:23.412 fused_ordering(135) 00:11:23.412 fused_ordering(136) 00:11:23.412 fused_ordering(137) 00:11:23.412 fused_ordering(138) 00:11:23.412 fused_ordering(139) 00:11:23.412 fused_ordering(140) 00:11:23.412 fused_ordering(141) 00:11:23.412 fused_ordering(142) 00:11:23.412 fused_ordering(143) 00:11:23.412 fused_ordering(144) 00:11:23.412 fused_ordering(145) 00:11:23.412 fused_ordering(146) 00:11:23.412 fused_ordering(147) 00:11:23.412 fused_ordering(148) 00:11:23.412 fused_ordering(149) 00:11:23.412 fused_ordering(150) 00:11:23.412 fused_ordering(151) 00:11:23.412 fused_ordering(152) 00:11:23.412 fused_ordering(153) 00:11:23.412 fused_ordering(154) 00:11:23.412 fused_ordering(155) 00:11:23.412 fused_ordering(156) 00:11:23.412 fused_ordering(157) 00:11:23.412 fused_ordering(158) 00:11:23.412 fused_ordering(159) 00:11:23.412 fused_ordering(160) 00:11:23.412 fused_ordering(161) 00:11:23.412 fused_ordering(162) 00:11:23.412 fused_ordering(163) 00:11:23.412 fused_ordering(164) 00:11:23.412 fused_ordering(165) 00:11:23.412 fused_ordering(166) 00:11:23.412 fused_ordering(167) 00:11:23.412 fused_ordering(168) 00:11:23.412 fused_ordering(169) 00:11:23.412 fused_ordering(170) 00:11:23.412 fused_ordering(171) 00:11:23.412 fused_ordering(172) 00:11:23.412 fused_ordering(173) 00:11:23.412 fused_ordering(174) 00:11:23.412 fused_ordering(175) 00:11:23.412 fused_ordering(176) 00:11:23.412 fused_ordering(177) 00:11:23.412 fused_ordering(178) 00:11:23.412 fused_ordering(179) 00:11:23.412 fused_ordering(180) 00:11:23.412 fused_ordering(181) 00:11:23.412 fused_ordering(182) 00:11:23.412 fused_ordering(183) 00:11:23.412 fused_ordering(184) 00:11:23.412 fused_ordering(185) 00:11:23.412 fused_ordering(186) 00:11:23.412 
fused_ordering(187) 00:11:23.412 fused_ordering(188) 00:11:23.412 fused_ordering(189) 00:11:23.412 fused_ordering(190) 00:11:23.412 fused_ordering(191) 00:11:23.412 fused_ordering(192) 00:11:23.412 fused_ordering(193) 00:11:23.412 fused_ordering(194) 00:11:23.412 fused_ordering(195) 00:11:23.412 fused_ordering(196) 00:11:23.412 fused_ordering(197) 00:11:23.412 fused_ordering(198) 00:11:23.412 fused_ordering(199) 00:11:23.412 fused_ordering(200) 00:11:23.412 fused_ordering(201) 00:11:23.412 fused_ordering(202) 00:11:23.412 fused_ordering(203) 00:11:23.412 fused_ordering(204) 00:11:23.412 fused_ordering(205) 00:11:23.672 fused_ordering(206) 00:11:23.672 fused_ordering(207) 00:11:23.672 fused_ordering(208) 00:11:23.672 fused_ordering(209) 00:11:23.672 fused_ordering(210) 00:11:23.672 fused_ordering(211) 00:11:23.672 fused_ordering(212) 00:11:23.672 fused_ordering(213) 00:11:23.672 fused_ordering(214) 00:11:23.672 fused_ordering(215) 00:11:23.672 fused_ordering(216) 00:11:23.672 fused_ordering(217) 00:11:23.672 fused_ordering(218) 00:11:23.672 fused_ordering(219) 00:11:23.672 fused_ordering(220) 00:11:23.672 fused_ordering(221) 00:11:23.672 fused_ordering(222) 00:11:23.672 fused_ordering(223) 00:11:23.672 fused_ordering(224) 00:11:23.672 fused_ordering(225) 00:11:23.672 fused_ordering(226) 00:11:23.672 fused_ordering(227) 00:11:23.672 fused_ordering(228) 00:11:23.672 fused_ordering(229) 00:11:23.672 fused_ordering(230) 00:11:23.672 fused_ordering(231) 00:11:23.672 fused_ordering(232) 00:11:23.672 fused_ordering(233) 00:11:23.672 fused_ordering(234) 00:11:23.672 fused_ordering(235) 00:11:23.672 fused_ordering(236) 00:11:23.672 fused_ordering(237) 00:11:23.672 fused_ordering(238) 00:11:23.672 fused_ordering(239) 00:11:23.672 fused_ordering(240) 00:11:23.672 fused_ordering(241) 00:11:23.672 fused_ordering(242) 00:11:23.672 fused_ordering(243) 00:11:23.672 fused_ordering(244) 00:11:23.672 fused_ordering(245) 00:11:23.672 fused_ordering(246) 00:11:23.672 fused_ordering(247) 00:11:23.672 fused_ordering(248) 00:11:23.672 fused_ordering(249) 00:11:23.672 fused_ordering(250) 00:11:23.672 fused_ordering(251) 00:11:23.672 fused_ordering(252) 00:11:23.672 fused_ordering(253) 00:11:23.672 fused_ordering(254) 00:11:23.672 fused_ordering(255) 00:11:23.672 fused_ordering(256) 00:11:23.672 fused_ordering(257) 00:11:23.672 fused_ordering(258) 00:11:23.672 fused_ordering(259) 00:11:23.672 fused_ordering(260) 00:11:23.672 fused_ordering(261) 00:11:23.672 fused_ordering(262) 00:11:23.672 fused_ordering(263) 00:11:23.672 fused_ordering(264) 00:11:23.672 fused_ordering(265) 00:11:23.672 fused_ordering(266) 00:11:23.672 fused_ordering(267) 00:11:23.672 fused_ordering(268) 00:11:23.672 fused_ordering(269) 00:11:23.672 fused_ordering(270) 00:11:23.672 fused_ordering(271) 00:11:23.672 fused_ordering(272) 00:11:23.672 fused_ordering(273) 00:11:23.672 fused_ordering(274) 00:11:23.672 fused_ordering(275) 00:11:23.672 fused_ordering(276) 00:11:23.672 fused_ordering(277) 00:11:23.672 fused_ordering(278) 00:11:23.672 fused_ordering(279) 00:11:23.672 fused_ordering(280) 00:11:23.672 fused_ordering(281) 00:11:23.672 fused_ordering(282) 00:11:23.672 fused_ordering(283) 00:11:23.672 fused_ordering(284) 00:11:23.672 fused_ordering(285) 00:11:23.672 fused_ordering(286) 00:11:23.672 fused_ordering(287) 00:11:23.672 fused_ordering(288) 00:11:23.672 fused_ordering(289) 00:11:23.672 fused_ordering(290) 00:11:23.672 fused_ordering(291) 00:11:23.672 fused_ordering(292) 00:11:23.672 fused_ordering(293) 00:11:23.672 fused_ordering(294) 
00:11:23.672 fused_ordering(295) 00:11:23.672 fused_ordering(296) 00:11:23.672 fused_ordering(297) 00:11:23.672 fused_ordering(298) 00:11:23.672 fused_ordering(299) 00:11:23.672 fused_ordering(300) 00:11:23.672 fused_ordering(301) 00:11:23.672 fused_ordering(302) 00:11:23.672 fused_ordering(303) 00:11:23.672 fused_ordering(304) 00:11:23.672 fused_ordering(305) 00:11:23.672 fused_ordering(306) 00:11:23.672 fused_ordering(307) 00:11:23.672 fused_ordering(308) 00:11:23.672 fused_ordering(309) 00:11:23.672 fused_ordering(310) 00:11:23.672 fused_ordering(311) 00:11:23.672 fused_ordering(312) 00:11:23.672 fused_ordering(313) 00:11:23.672 fused_ordering(314) 00:11:23.672 fused_ordering(315) 00:11:23.672 fused_ordering(316) 00:11:23.672 fused_ordering(317) 00:11:23.672 fused_ordering(318) 00:11:23.672 fused_ordering(319) 00:11:23.672 fused_ordering(320) 00:11:23.672 fused_ordering(321) 00:11:23.672 fused_ordering(322) 00:11:23.672 fused_ordering(323) 00:11:23.672 fused_ordering(324) 00:11:23.672 fused_ordering(325) 00:11:23.672 fused_ordering(326) 00:11:23.672 fused_ordering(327) 00:11:23.672 fused_ordering(328) 00:11:23.672 fused_ordering(329) 00:11:23.672 fused_ordering(330) 00:11:23.672 fused_ordering(331) 00:11:23.672 fused_ordering(332) 00:11:23.672 fused_ordering(333) 00:11:23.672 fused_ordering(334) 00:11:23.672 fused_ordering(335) 00:11:23.672 fused_ordering(336) 00:11:23.672 fused_ordering(337) 00:11:23.672 fused_ordering(338) 00:11:23.672 fused_ordering(339) 00:11:23.672 fused_ordering(340) 00:11:23.672 fused_ordering(341) 00:11:23.672 fused_ordering(342) 00:11:23.672 fused_ordering(343) 00:11:23.672 fused_ordering(344) 00:11:23.672 fused_ordering(345) 00:11:23.672 fused_ordering(346) 00:11:23.672 fused_ordering(347) 00:11:23.672 fused_ordering(348) 00:11:23.672 fused_ordering(349) 00:11:23.672 fused_ordering(350) 00:11:23.672 fused_ordering(351) 00:11:23.672 fused_ordering(352) 00:11:23.672 fused_ordering(353) 00:11:23.673 fused_ordering(354) 00:11:23.673 fused_ordering(355) 00:11:23.673 fused_ordering(356) 00:11:23.673 fused_ordering(357) 00:11:23.673 fused_ordering(358) 00:11:23.673 fused_ordering(359) 00:11:23.673 fused_ordering(360) 00:11:23.673 fused_ordering(361) 00:11:23.673 fused_ordering(362) 00:11:23.673 fused_ordering(363) 00:11:23.673 fused_ordering(364) 00:11:23.673 fused_ordering(365) 00:11:23.673 fused_ordering(366) 00:11:23.673 fused_ordering(367) 00:11:23.673 fused_ordering(368) 00:11:23.673 fused_ordering(369) 00:11:23.673 fused_ordering(370) 00:11:23.673 fused_ordering(371) 00:11:23.673 fused_ordering(372) 00:11:23.673 fused_ordering(373) 00:11:23.673 fused_ordering(374) 00:11:23.673 fused_ordering(375) 00:11:23.673 fused_ordering(376) 00:11:23.673 fused_ordering(377) 00:11:23.673 fused_ordering(378) 00:11:23.673 fused_ordering(379) 00:11:23.673 fused_ordering(380) 00:11:23.673 fused_ordering(381) 00:11:23.673 fused_ordering(382) 00:11:23.673 fused_ordering(383) 00:11:23.673 fused_ordering(384) 00:11:23.673 fused_ordering(385) 00:11:23.673 fused_ordering(386) 00:11:23.673 fused_ordering(387) 00:11:23.673 fused_ordering(388) 00:11:23.673 fused_ordering(389) 00:11:23.673 fused_ordering(390) 00:11:23.673 fused_ordering(391) 00:11:23.673 fused_ordering(392) 00:11:23.673 fused_ordering(393) 00:11:23.673 fused_ordering(394) 00:11:23.673 fused_ordering(395) 00:11:23.673 fused_ordering(396) 00:11:23.673 fused_ordering(397) 00:11:23.673 fused_ordering(398) 00:11:23.673 fused_ordering(399) 00:11:23.673 fused_ordering(400) 00:11:23.673 fused_ordering(401) 00:11:23.673 
fused_ordering(402) 00:11:23.673 fused_ordering(403) 00:11:23.673 fused_ordering(404) 00:11:23.673 fused_ordering(405) 00:11:23.673 fused_ordering(406) 00:11:23.673 fused_ordering(407) 00:11:23.673 fused_ordering(408) 00:11:23.673 fused_ordering(409) 00:11:23.673 fused_ordering(410) 00:11:24.244 fused_ordering(411) 00:11:24.244 fused_ordering(412) 00:11:24.244 fused_ordering(413) 00:11:24.244 fused_ordering(414) 00:11:24.244 fused_ordering(415) 00:11:24.244 fused_ordering(416) 00:11:24.244 fused_ordering(417) 00:11:24.244 fused_ordering(418) 00:11:24.244 fused_ordering(419) 00:11:24.244 fused_ordering(420) 00:11:24.244 fused_ordering(421) 00:11:24.244 fused_ordering(422) 00:11:24.244 fused_ordering(423) 00:11:24.244 fused_ordering(424) 00:11:24.244 fused_ordering(425) 00:11:24.244 fused_ordering(426) 00:11:24.244 fused_ordering(427) 00:11:24.244 fused_ordering(428) 00:11:24.244 fused_ordering(429) 00:11:24.244 fused_ordering(430) 00:11:24.244 fused_ordering(431) 00:11:24.244 fused_ordering(432) 00:11:24.244 fused_ordering(433) 00:11:24.244 fused_ordering(434) 00:11:24.244 fused_ordering(435) 00:11:24.244 fused_ordering(436) 00:11:24.244 fused_ordering(437) 00:11:24.244 fused_ordering(438) 00:11:24.244 fused_ordering(439) 00:11:24.244 fused_ordering(440) 00:11:24.244 fused_ordering(441) 00:11:24.244 fused_ordering(442) 00:11:24.244 fused_ordering(443) 00:11:24.244 fused_ordering(444) 00:11:24.244 fused_ordering(445) 00:11:24.244 fused_ordering(446) 00:11:24.244 fused_ordering(447) 00:11:24.244 fused_ordering(448) 00:11:24.244 fused_ordering(449) 00:11:24.244 fused_ordering(450) 00:11:24.244 fused_ordering(451) 00:11:24.244 fused_ordering(452) 00:11:24.244 fused_ordering(453) 00:11:24.244 fused_ordering(454) 00:11:24.244 fused_ordering(455) 00:11:24.244 fused_ordering(456) 00:11:24.244 fused_ordering(457) 00:11:24.244 fused_ordering(458) 00:11:24.244 fused_ordering(459) 00:11:24.244 fused_ordering(460) 00:11:24.244 fused_ordering(461) 00:11:24.244 fused_ordering(462) 00:11:24.244 fused_ordering(463) 00:11:24.244 fused_ordering(464) 00:11:24.244 fused_ordering(465) 00:11:24.244 fused_ordering(466) 00:11:24.244 fused_ordering(467) 00:11:24.244 fused_ordering(468) 00:11:24.244 fused_ordering(469) 00:11:24.244 fused_ordering(470) 00:11:24.244 fused_ordering(471) 00:11:24.244 fused_ordering(472) 00:11:24.244 fused_ordering(473) 00:11:24.244 fused_ordering(474) 00:11:24.244 fused_ordering(475) 00:11:24.244 fused_ordering(476) 00:11:24.244 fused_ordering(477) 00:11:24.244 fused_ordering(478) 00:11:24.244 fused_ordering(479) 00:11:24.244 fused_ordering(480) 00:11:24.244 fused_ordering(481) 00:11:24.244 fused_ordering(482) 00:11:24.244 fused_ordering(483) 00:11:24.244 fused_ordering(484) 00:11:24.244 fused_ordering(485) 00:11:24.244 fused_ordering(486) 00:11:24.244 fused_ordering(487) 00:11:24.244 fused_ordering(488) 00:11:24.244 fused_ordering(489) 00:11:24.244 fused_ordering(490) 00:11:24.244 fused_ordering(491) 00:11:24.244 fused_ordering(492) 00:11:24.244 fused_ordering(493) 00:11:24.244 fused_ordering(494) 00:11:24.244 fused_ordering(495) 00:11:24.244 fused_ordering(496) 00:11:24.244 fused_ordering(497) 00:11:24.244 fused_ordering(498) 00:11:24.244 fused_ordering(499) 00:11:24.244 fused_ordering(500) 00:11:24.244 fused_ordering(501) 00:11:24.244 fused_ordering(502) 00:11:24.244 fused_ordering(503) 00:11:24.244 fused_ordering(504) 00:11:24.244 fused_ordering(505) 00:11:24.244 fused_ordering(506) 00:11:24.244 fused_ordering(507) 00:11:24.244 fused_ordering(508) 00:11:24.244 fused_ordering(509) 
00:11:24.244 fused_ordering(510) 00:11:24.244 fused_ordering(511) 00:11:24.244 fused_ordering(512) 00:11:24.244 fused_ordering(513) 00:11:24.244 fused_ordering(514) 00:11:24.244 fused_ordering(515) 00:11:24.244 fused_ordering(516) 00:11:24.244 fused_ordering(517) 00:11:24.244 fused_ordering(518) 00:11:24.244 fused_ordering(519) 00:11:24.244 fused_ordering(520) 00:11:24.244 fused_ordering(521) 00:11:24.244 fused_ordering(522) 00:11:24.244 fused_ordering(523) 00:11:24.244 fused_ordering(524) 00:11:24.244 fused_ordering(525) 00:11:24.244 fused_ordering(526) 00:11:24.244 fused_ordering(527) 00:11:24.244 fused_ordering(528) 00:11:24.244 fused_ordering(529) 00:11:24.244 fused_ordering(530) 00:11:24.244 fused_ordering(531) 00:11:24.244 fused_ordering(532) 00:11:24.244 fused_ordering(533) 00:11:24.244 fused_ordering(534) 00:11:24.244 fused_ordering(535) 00:11:24.244 fused_ordering(536) 00:11:24.244 fused_ordering(537) 00:11:24.244 fused_ordering(538) 00:11:24.244 fused_ordering(539) 00:11:24.244 fused_ordering(540) 00:11:24.244 fused_ordering(541) 00:11:24.244 fused_ordering(542) 00:11:24.244 fused_ordering(543) 00:11:24.244 fused_ordering(544) 00:11:24.244 fused_ordering(545) 00:11:24.244 fused_ordering(546) 00:11:24.244 fused_ordering(547) 00:11:24.244 fused_ordering(548) 00:11:24.244 fused_ordering(549) 00:11:24.244 fused_ordering(550) 00:11:24.244 fused_ordering(551) 00:11:24.244 fused_ordering(552) 00:11:24.244 fused_ordering(553) 00:11:24.244 fused_ordering(554) 00:11:24.244 fused_ordering(555) 00:11:24.244 fused_ordering(556) 00:11:24.244 fused_ordering(557) 00:11:24.244 fused_ordering(558) 00:11:24.244 fused_ordering(559) 00:11:24.244 fused_ordering(560) 00:11:24.244 fused_ordering(561) 00:11:24.244 fused_ordering(562) 00:11:24.244 fused_ordering(563) 00:11:24.244 fused_ordering(564) 00:11:24.244 fused_ordering(565) 00:11:24.244 fused_ordering(566) 00:11:24.244 fused_ordering(567) 00:11:24.244 fused_ordering(568) 00:11:24.244 fused_ordering(569) 00:11:24.244 fused_ordering(570) 00:11:24.244 fused_ordering(571) 00:11:24.244 fused_ordering(572) 00:11:24.244 fused_ordering(573) 00:11:24.244 fused_ordering(574) 00:11:24.244 fused_ordering(575) 00:11:24.244 fused_ordering(576) 00:11:24.244 fused_ordering(577) 00:11:24.244 fused_ordering(578) 00:11:24.244 fused_ordering(579) 00:11:24.244 fused_ordering(580) 00:11:24.244 fused_ordering(581) 00:11:24.244 fused_ordering(582) 00:11:24.244 fused_ordering(583) 00:11:24.244 fused_ordering(584) 00:11:24.244 fused_ordering(585) 00:11:24.244 fused_ordering(586) 00:11:24.244 fused_ordering(587) 00:11:24.244 fused_ordering(588) 00:11:24.244 fused_ordering(589) 00:11:24.244 fused_ordering(590) 00:11:24.244 fused_ordering(591) 00:11:24.244 fused_ordering(592) 00:11:24.244 fused_ordering(593) 00:11:24.244 fused_ordering(594) 00:11:24.244 fused_ordering(595) 00:11:24.244 fused_ordering(596) 00:11:24.244 fused_ordering(597) 00:11:24.244 fused_ordering(598) 00:11:24.244 fused_ordering(599) 00:11:24.244 fused_ordering(600) 00:11:24.244 fused_ordering(601) 00:11:24.244 fused_ordering(602) 00:11:24.244 fused_ordering(603) 00:11:24.244 fused_ordering(604) 00:11:24.244 fused_ordering(605) 00:11:24.244 fused_ordering(606) 00:11:24.244 fused_ordering(607) 00:11:24.244 fused_ordering(608) 00:11:24.244 fused_ordering(609) 00:11:24.244 fused_ordering(610) 00:11:24.244 fused_ordering(611) 00:11:24.244 fused_ordering(612) 00:11:24.244 fused_ordering(613) 00:11:24.244 fused_ordering(614) 00:11:24.244 fused_ordering(615) 00:11:24.816 fused_ordering(616) 00:11:24.816 
fused_ordering(617) 00:11:24.816 fused_ordering(618) 00:11:24.816 fused_ordering(619) 00:11:24.816 fused_ordering(620) 00:11:24.816 fused_ordering(621) 00:11:24.816 fused_ordering(622) 00:11:24.816 fused_ordering(623) 00:11:24.816 fused_ordering(624) 00:11:24.816 fused_ordering(625) 00:11:24.816 fused_ordering(626) 00:11:24.816 fused_ordering(627) 00:11:24.816 fused_ordering(628) 00:11:24.816 fused_ordering(629) 00:11:24.816 fused_ordering(630) 00:11:24.816 fused_ordering(631) 00:11:24.816 fused_ordering(632) 00:11:24.816 fused_ordering(633) 00:11:24.816 fused_ordering(634) 00:11:24.816 fused_ordering(635) 00:11:24.816 fused_ordering(636) 00:11:24.816 fused_ordering(637) 00:11:24.816 fused_ordering(638) 00:11:24.816 fused_ordering(639) 00:11:24.816 fused_ordering(640) 00:11:24.816 fused_ordering(641) 00:11:24.816 fused_ordering(642) 00:11:24.816 fused_ordering(643) 00:11:24.816 fused_ordering(644) 00:11:24.816 fused_ordering(645) 00:11:24.816 fused_ordering(646) 00:11:24.816 fused_ordering(647) 00:11:24.816 fused_ordering(648) 00:11:24.816 fused_ordering(649) 00:11:24.816 fused_ordering(650) 00:11:24.816 fused_ordering(651) 00:11:24.816 fused_ordering(652) 00:11:24.816 fused_ordering(653) 00:11:24.816 fused_ordering(654) 00:11:24.816 fused_ordering(655) 00:11:24.816 fused_ordering(656) 00:11:24.816 fused_ordering(657) 00:11:24.816 fused_ordering(658) 00:11:24.816 fused_ordering(659) 00:11:24.816 fused_ordering(660) 00:11:24.816 fused_ordering(661) 00:11:24.816 fused_ordering(662) 00:11:24.816 fused_ordering(663) 00:11:24.816 fused_ordering(664) 00:11:24.816 fused_ordering(665) 00:11:24.816 fused_ordering(666) 00:11:24.816 fused_ordering(667) 00:11:24.816 fused_ordering(668) 00:11:24.816 fused_ordering(669) 00:11:24.816 fused_ordering(670) 00:11:24.816 fused_ordering(671) 00:11:24.816 fused_ordering(672) 00:11:24.816 fused_ordering(673) 00:11:24.816 fused_ordering(674) 00:11:24.816 fused_ordering(675) 00:11:24.816 fused_ordering(676) 00:11:24.816 fused_ordering(677) 00:11:24.816 fused_ordering(678) 00:11:24.816 fused_ordering(679) 00:11:24.816 fused_ordering(680) 00:11:24.816 fused_ordering(681) 00:11:24.816 fused_ordering(682) 00:11:24.816 fused_ordering(683) 00:11:24.816 fused_ordering(684) 00:11:24.816 fused_ordering(685) 00:11:24.816 fused_ordering(686) 00:11:24.816 fused_ordering(687) 00:11:24.816 fused_ordering(688) 00:11:24.816 fused_ordering(689) 00:11:24.816 fused_ordering(690) 00:11:24.816 fused_ordering(691) 00:11:24.816 fused_ordering(692) 00:11:24.816 fused_ordering(693) 00:11:24.816 fused_ordering(694) 00:11:24.816 fused_ordering(695) 00:11:24.816 fused_ordering(696) 00:11:24.816 fused_ordering(697) 00:11:24.816 fused_ordering(698) 00:11:24.816 fused_ordering(699) 00:11:24.816 fused_ordering(700) 00:11:24.816 fused_ordering(701) 00:11:24.816 fused_ordering(702) 00:11:24.816 fused_ordering(703) 00:11:24.816 fused_ordering(704) 00:11:24.816 fused_ordering(705) 00:11:24.816 fused_ordering(706) 00:11:24.816 fused_ordering(707) 00:11:24.816 fused_ordering(708) 00:11:24.816 fused_ordering(709) 00:11:24.816 fused_ordering(710) 00:11:24.816 fused_ordering(711) 00:11:24.816 fused_ordering(712) 00:11:24.816 fused_ordering(713) 00:11:24.816 fused_ordering(714) 00:11:24.816 fused_ordering(715) 00:11:24.816 fused_ordering(716) 00:11:24.816 fused_ordering(717) 00:11:24.816 fused_ordering(718) 00:11:24.816 fused_ordering(719) 00:11:24.816 fused_ordering(720) 00:11:24.816 fused_ordering(721) 00:11:24.816 fused_ordering(722) 00:11:24.816 fused_ordering(723) 00:11:24.816 fused_ordering(724) 
00:11:24.816 fused_ordering(725) 00:11:24.816 fused_ordering(726) 00:11:24.816 fused_ordering(727) 00:11:24.816 fused_ordering(728) 00:11:24.816 fused_ordering(729) 00:11:24.816 fused_ordering(730) 00:11:24.816 fused_ordering(731) 00:11:24.816 fused_ordering(732) 00:11:24.816 fused_ordering(733) 00:11:24.816 fused_ordering(734) 00:11:24.816 fused_ordering(735) 00:11:24.816 fused_ordering(736) 00:11:24.816 fused_ordering(737) 00:11:24.816 fused_ordering(738) 00:11:24.816 fused_ordering(739) 00:11:24.816 fused_ordering(740) 00:11:24.816 fused_ordering(741) 00:11:24.816 fused_ordering(742) 00:11:24.816 fused_ordering(743) 00:11:24.816 fused_ordering(744) 00:11:24.816 fused_ordering(745) 00:11:24.816 fused_ordering(746) 00:11:24.816 fused_ordering(747) 00:11:24.816 fused_ordering(748) 00:11:24.816 fused_ordering(749) 00:11:24.816 fused_ordering(750) 00:11:24.816 fused_ordering(751) 00:11:24.816 fused_ordering(752) 00:11:24.816 fused_ordering(753) 00:11:24.816 fused_ordering(754) 00:11:24.816 fused_ordering(755) 00:11:24.816 fused_ordering(756) 00:11:24.816 fused_ordering(757) 00:11:24.816 fused_ordering(758) 00:11:24.816 fused_ordering(759) 00:11:24.816 fused_ordering(760) 00:11:24.816 fused_ordering(761) 00:11:24.816 fused_ordering(762) 00:11:24.816 fused_ordering(763) 00:11:24.816 fused_ordering(764) 00:11:24.816 fused_ordering(765) 00:11:24.816 fused_ordering(766) 00:11:24.816 fused_ordering(767) 00:11:24.816 fused_ordering(768) 00:11:24.816 fused_ordering(769) 00:11:24.816 fused_ordering(770) 00:11:24.816 fused_ordering(771) 00:11:24.816 fused_ordering(772) 00:11:24.816 fused_ordering(773) 00:11:24.816 fused_ordering(774) 00:11:24.816 fused_ordering(775) 00:11:24.816 fused_ordering(776) 00:11:24.816 fused_ordering(777) 00:11:24.816 fused_ordering(778) 00:11:24.816 fused_ordering(779) 00:11:24.816 fused_ordering(780) 00:11:24.816 fused_ordering(781) 00:11:24.816 fused_ordering(782) 00:11:24.816 fused_ordering(783) 00:11:24.816 fused_ordering(784) 00:11:24.816 fused_ordering(785) 00:11:24.816 fused_ordering(786) 00:11:24.816 fused_ordering(787) 00:11:24.816 fused_ordering(788) 00:11:24.816 fused_ordering(789) 00:11:24.816 fused_ordering(790) 00:11:24.816 fused_ordering(791) 00:11:24.816 fused_ordering(792) 00:11:24.816 fused_ordering(793) 00:11:24.816 fused_ordering(794) 00:11:24.816 fused_ordering(795) 00:11:24.816 fused_ordering(796) 00:11:24.816 fused_ordering(797) 00:11:24.816 fused_ordering(798) 00:11:24.816 fused_ordering(799) 00:11:24.816 fused_ordering(800) 00:11:24.816 fused_ordering(801) 00:11:24.816 fused_ordering(802) 00:11:24.816 fused_ordering(803) 00:11:24.816 fused_ordering(804) 00:11:24.816 fused_ordering(805) 00:11:24.816 fused_ordering(806) 00:11:24.816 fused_ordering(807) 00:11:24.816 fused_ordering(808) 00:11:24.816 fused_ordering(809) 00:11:24.816 fused_ordering(810) 00:11:24.816 fused_ordering(811) 00:11:24.816 fused_ordering(812) 00:11:24.816 fused_ordering(813) 00:11:24.816 fused_ordering(814) 00:11:24.816 fused_ordering(815) 00:11:24.816 fused_ordering(816) 00:11:24.816 fused_ordering(817) 00:11:24.816 fused_ordering(818) 00:11:24.816 fused_ordering(819) 00:11:24.816 fused_ordering(820) 00:11:25.759 fused_ordering(821) 00:11:25.759 fused_ordering(822) 00:11:25.759 fused_ordering(823) 00:11:25.759 fused_ordering(824) 00:11:25.759 fused_ordering(825) 00:11:25.759 fused_ordering(826) 00:11:25.759 fused_ordering(827) 00:11:25.759 fused_ordering(828) 00:11:25.759 fused_ordering(829) 00:11:25.759 fused_ordering(830) 00:11:25.759 fused_ordering(831) 00:11:25.759 
fused_ordering(832) 00:11:25.759 fused_ordering(833) 00:11:25.759 fused_ordering(834) 00:11:25.759 fused_ordering(835) 00:11:25.759 fused_ordering(836) 00:11:25.759 fused_ordering(837) 00:11:25.759 fused_ordering(838) 00:11:25.759 fused_ordering(839) 00:11:25.759 fused_ordering(840) 00:11:25.759 fused_ordering(841) 00:11:25.759 fused_ordering(842) 00:11:25.759 fused_ordering(843) 00:11:25.759 fused_ordering(844) 00:11:25.759 fused_ordering(845) 00:11:25.759 fused_ordering(846) 00:11:25.759 fused_ordering(847) 00:11:25.759 fused_ordering(848) 00:11:25.759 fused_ordering(849) 00:11:25.759 fused_ordering(850) 00:11:25.759 fused_ordering(851) 00:11:25.759 fused_ordering(852) 00:11:25.759 fused_ordering(853) 00:11:25.759 fused_ordering(854) 00:11:25.759 fused_ordering(855) 00:11:25.759 fused_ordering(856) 00:11:25.759 fused_ordering(857) 00:11:25.760 fused_ordering(858) 00:11:25.760 fused_ordering(859) 00:11:25.760 fused_ordering(860) 00:11:25.760 fused_ordering(861) 00:11:25.760 fused_ordering(862) 00:11:25.760 fused_ordering(863) 00:11:25.760 fused_ordering(864) 00:11:25.760 fused_ordering(865) 00:11:25.760 fused_ordering(866) 00:11:25.760 fused_ordering(867) 00:11:25.760 fused_ordering(868) 00:11:25.760 fused_ordering(869) 00:11:25.760 fused_ordering(870) 00:11:25.760 fused_ordering(871) 00:11:25.760 fused_ordering(872) 00:11:25.760 fused_ordering(873) 00:11:25.760 fused_ordering(874) 00:11:25.760 fused_ordering(875) 00:11:25.760 fused_ordering(876) 00:11:25.760 fused_ordering(877) 00:11:25.760 fused_ordering(878) 00:11:25.760 fused_ordering(879) 00:11:25.760 fused_ordering(880) 00:11:25.760 fused_ordering(881) 00:11:25.760 fused_ordering(882) 00:11:25.760 fused_ordering(883) 00:11:25.760 fused_ordering(884) 00:11:25.760 fused_ordering(885) 00:11:25.760 fused_ordering(886) 00:11:25.760 fused_ordering(887) 00:11:25.760 fused_ordering(888) 00:11:25.760 fused_ordering(889) 00:11:25.760 fused_ordering(890) 00:11:25.760 fused_ordering(891) 00:11:25.760 fused_ordering(892) 00:11:25.760 fused_ordering(893) 00:11:25.760 fused_ordering(894) 00:11:25.760 fused_ordering(895) 00:11:25.760 fused_ordering(896) 00:11:25.760 fused_ordering(897) 00:11:25.760 fused_ordering(898) 00:11:25.760 fused_ordering(899) 00:11:25.760 fused_ordering(900) 00:11:25.760 fused_ordering(901) 00:11:25.760 fused_ordering(902) 00:11:25.760 fused_ordering(903) 00:11:25.760 fused_ordering(904) 00:11:25.760 fused_ordering(905) 00:11:25.760 fused_ordering(906) 00:11:25.760 fused_ordering(907) 00:11:25.760 fused_ordering(908) 00:11:25.760 fused_ordering(909) 00:11:25.760 fused_ordering(910) 00:11:25.760 fused_ordering(911) 00:11:25.760 fused_ordering(912) 00:11:25.760 fused_ordering(913) 00:11:25.760 fused_ordering(914) 00:11:25.760 fused_ordering(915) 00:11:25.760 fused_ordering(916) 00:11:25.760 fused_ordering(917) 00:11:25.760 fused_ordering(918) 00:11:25.760 fused_ordering(919) 00:11:25.760 fused_ordering(920) 00:11:25.760 fused_ordering(921) 00:11:25.760 fused_ordering(922) 00:11:25.760 fused_ordering(923) 00:11:25.760 fused_ordering(924) 00:11:25.760 fused_ordering(925) 00:11:25.760 fused_ordering(926) 00:11:25.760 fused_ordering(927) 00:11:25.760 fused_ordering(928) 00:11:25.760 fused_ordering(929) 00:11:25.760 fused_ordering(930) 00:11:25.760 fused_ordering(931) 00:11:25.760 fused_ordering(932) 00:11:25.760 fused_ordering(933) 00:11:25.760 fused_ordering(934) 00:11:25.760 fused_ordering(935) 00:11:25.760 fused_ordering(936) 00:11:25.760 fused_ordering(937) 00:11:25.760 fused_ordering(938) 00:11:25.760 fused_ordering(939) 
00:11:25.760 fused_ordering(940) 00:11:25.760 fused_ordering(941) 00:11:25.760 fused_ordering(942) 00:11:25.760 fused_ordering(943) 00:11:25.760 fused_ordering(944) 00:11:25.760 fused_ordering(945) 00:11:25.760 fused_ordering(946) 00:11:25.760 fused_ordering(947) 00:11:25.760 fused_ordering(948) 00:11:25.760 fused_ordering(949) 00:11:25.760 fused_ordering(950) 00:11:25.760 fused_ordering(951) 00:11:25.760 fused_ordering(952) 00:11:25.760 fused_ordering(953) 00:11:25.760 fused_ordering(954) 00:11:25.760 fused_ordering(955) 00:11:25.760 fused_ordering(956) 00:11:25.760 fused_ordering(957) 00:11:25.760 fused_ordering(958) 00:11:25.760 fused_ordering(959) 00:11:25.760 fused_ordering(960) 00:11:25.760 fused_ordering(961) 00:11:25.760 fused_ordering(962) 00:11:25.760 fused_ordering(963) 00:11:25.760 fused_ordering(964) 00:11:25.760 fused_ordering(965) 00:11:25.760 fused_ordering(966) 00:11:25.760 fused_ordering(967) 00:11:25.760 fused_ordering(968) 00:11:25.760 fused_ordering(969) 00:11:25.760 fused_ordering(970) 00:11:25.760 fused_ordering(971) 00:11:25.760 fused_ordering(972) 00:11:25.760 fused_ordering(973) 00:11:25.760 fused_ordering(974) 00:11:25.760 fused_ordering(975) 00:11:25.760 fused_ordering(976) 00:11:25.760 fused_ordering(977) 00:11:25.760 fused_ordering(978) 00:11:25.760 fused_ordering(979) 00:11:25.760 fused_ordering(980) 00:11:25.760 fused_ordering(981) 00:11:25.760 fused_ordering(982) 00:11:25.760 fused_ordering(983) 00:11:25.760 fused_ordering(984) 00:11:25.760 fused_ordering(985) 00:11:25.760 fused_ordering(986) 00:11:25.760 fused_ordering(987) 00:11:25.760 fused_ordering(988) 00:11:25.760 fused_ordering(989) 00:11:25.760 fused_ordering(990) 00:11:25.760 fused_ordering(991) 00:11:25.760 fused_ordering(992) 00:11:25.760 fused_ordering(993) 00:11:25.760 fused_ordering(994) 00:11:25.760 fused_ordering(995) 00:11:25.760 fused_ordering(996) 00:11:25.760 fused_ordering(997) 00:11:25.760 fused_ordering(998) 00:11:25.760 fused_ordering(999) 00:11:25.760 fused_ordering(1000) 00:11:25.760 fused_ordering(1001) 00:11:25.760 fused_ordering(1002) 00:11:25.760 fused_ordering(1003) 00:11:25.760 fused_ordering(1004) 00:11:25.760 fused_ordering(1005) 00:11:25.760 fused_ordering(1006) 00:11:25.760 fused_ordering(1007) 00:11:25.760 fused_ordering(1008) 00:11:25.760 fused_ordering(1009) 00:11:25.760 fused_ordering(1010) 00:11:25.760 fused_ordering(1011) 00:11:25.760 fused_ordering(1012) 00:11:25.760 fused_ordering(1013) 00:11:25.760 fused_ordering(1014) 00:11:25.760 fused_ordering(1015) 00:11:25.760 fused_ordering(1016) 00:11:25.760 fused_ordering(1017) 00:11:25.760 fused_ordering(1018) 00:11:25.760 fused_ordering(1019) 00:11:25.760 fused_ordering(1020) 00:11:25.760 fused_ordering(1021) 00:11:25.760 fused_ordering(1022) 00:11:25.760 fused_ordering(1023) 00:11:25.760 21:27:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:25.760 21:27:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:25.760 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:25.760 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:25.760 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:25.760 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:25.760 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:11:25.761 rmmod nvme_tcp 00:11:25.761 rmmod nvme_fabrics 00:11:25.761 rmmod nvme_keyring 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2063048 ']' 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2063048 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2063048 ']' 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2063048 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2063048 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2063048' 00:11:25.761 killing process with pid 2063048 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2063048 00:11:25.761 21:27:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2063048 00:11:26.022 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:26.022 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:26.022 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:26.022 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:26.022 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:26.022 21:27:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.022 21:27:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.022 21:27:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.936 21:27:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:27.936 00:11:27.936 real 0m13.518s 00:11:27.936 user 0m7.428s 00:11:27.936 sys 0m7.351s 00:11:27.936 21:27:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.936 21:27:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.936 ************************************ 00:11:27.936 END TEST nvmf_fused_ordering 00:11:27.936 ************************************ 00:11:27.936 21:27:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:27.936 21:27:17 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:27.936 21:27:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:27.936 21:27:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
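[Editor's note] The nvmftestfini teardown traced just above (kernel module unload, target shutdown, namespace removal, address flush) can be summarized as the sketch below. The pid and interface names are the ones from this run; the step standing in for remove_spdk_ns is an assumption and is marked as such in the comments.

```bash
# Teardown mirroring the nvmftestfini trace above (pid/interface names from this run).
trap - SIGINT SIGTERM EXIT            # drop the cleanup trap installed after nvmfappstart
modprobe -v -r nvme-tcp               # rmmod nvme_tcp, nvme_fabrics, nvme_keyring, as logged above
modprobe -v -r nvme-fabrics           # the harness runs these under set +e with up to 20 retries
kill 2063048                          # stop the nvmf_tgt reactor (pid 2063048 in this run)
wait 2063048 2>/dev/null || true
ip netns delete cvl_0_0_ns_spdk       # assumption: this is the effect of remove_spdk_ns here
ip -4 addr flush cvl_0_1
```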
00:11:27.936 21:27:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.936 ************************************ 00:11:27.936 START TEST nvmf_delete_subsystem 00:11:27.936 ************************************ 00:11:27.936 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:28.197 * Looking for test storage... 00:11:28.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.197 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:28.198 21:27:17 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.198 21:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.339 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.339 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:36.339 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:36.339 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:36.339 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:36.339 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:36.339 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:36.340 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:36.340 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:36.340 
21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:36.340 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:36.340 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.340 21:27:24 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:36.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:11:36.340 00:11:36.340 --- 10.0.0.2 ping statistics --- 00:11:36.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.340 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:36.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:11:36.340 00:11:36.340 --- 10.0.0.1 ping statistics --- 00:11:36.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.340 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:36.340 21:27:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:36.340 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:36.340 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:36.340 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:36.340 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.340 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2068072 00:11:36.340 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2068072 00:11:36.340 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:36.340 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2068072 ']' 00:11:36.340 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.340 21:27:25 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.340 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.340 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.340 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.340 [2024-07-15 21:27:25.092446] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:11:36.340 [2024-07-15 21:27:25.092512] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.340 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.340 [2024-07-15 21:27:25.163864] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:36.341 [2024-07-15 21:27:25.238519] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.341 [2024-07-15 21:27:25.238557] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.341 [2024-07-15 21:27:25.238564] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.341 [2024-07-15 21:27:25.238571] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.341 [2024-07-15 21:27:25.238576] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
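(For readers retracing this bring-up by hand: the target startup above and the rpc_cmd calls that follow reduce to roughly the shell sketch below. It is assembled only from commands visible in this log, the nvmf_tgt launch inside the cvl_0_0_ns_spdk namespace, the spdk_trace hint printed by app_setup_trace, and the rpc.py path the harness itself uses; treat it as an illustration, not the verbatim autotest script.)

# Sketch only; paths, netns name and options are the ones recorded in this log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

# Start the target inside the test namespace and wait for its RPC socket
# (the log's "Waiting for process to start up and listen on /var/tmp/spdk.sock").
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

# Optional, as suggested by app_setup_trace above (assumes spdk_trace is on PATH):
# spdk_trace -s nvmf -i 0

# Configure the target over RPC; these mirror the rpc_cmd calls the log shows next.
RPC="$SPDK/scripts/rpc.py"
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" bdev_null_create NULL1 1000 512
"$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0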
00:11:36.341 [2024-07-15 21:27:25.238710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.341 [2024-07-15 21:27:25.238713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.341 [2024-07-15 21:27:25.910672] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.341 [2024-07-15 21:27:25.926797] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.341 NULL1 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.341 Delay0 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.341 21:27:25 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2068151 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:36.341 21:27:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:36.341 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.341 [2024-07-15 21:27:26.011498] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:38.307 21:27:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.307 21:27:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.307 21:27:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 
00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 starting I/O failed: -6 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Write completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 starting I/O failed: -6 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.568 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 
00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 [2024-07-15 21:27:28.138389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1caf390 is same with the state(5) to be set 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with 
error (sct=0, sc=8) 00:11:38.569 starting I/O failed: -6 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 [2024-07-15 21:27:28.140188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2124000c00 is same with the state(5) to be set 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Write completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:38.569 Read completed with error (sct=0, sc=8) 00:11:39.513 [2024-07-15 21:27:29.110593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb0a70 is same with the state(5) to be set 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Write completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Write completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 
00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Write completed with error (sct=0, sc=8) 00:11:39.513 Write completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Write completed with error (sct=0, sc=8) 00:11:39.513 Write completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Write completed with error (sct=0, sc=8) 00:11:39.513 [2024-07-15 21:27:29.140444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f212400d020 is same with the state(5) to be set 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Write completed with error (sct=0, sc=8) 00:11:39.513 Write completed with error (sct=0, sc=8) 00:11:39.513 Write completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Read completed with error (sct=0, sc=8) 00:11:39.513 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 [2024-07-15 21:27:29.140580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f212400d800 is same with the state(5) to be set 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 
00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 [2024-07-15 21:27:29.143347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cafe40 is same with the state(5) to be set 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Write completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 Read completed with error (sct=0, sc=8) 00:11:39.514 [2024-07-15 21:27:29.143473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1caf7a0 is same with the state(5) to be set 00:11:39.514 Initializing NVMe Controllers 00:11:39.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:11:39.514 Controller IO queue size 128, less than required. 00:11:39.514 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:39.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:39.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:39.514 Initialization complete. Launching workers. 00:11:39.514 ======================================================== 00:11:39.514 Latency(us) 00:11:39.514 Device Information : IOPS MiB/s Average min max 00:11:39.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 180.63 0.09 910362.28 303.88 1010352.65 00:11:39.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.77 0.07 1013940.86 200.45 1999569.53 00:11:39.514 ======================================================== 00:11:39.514 Total : 333.40 0.16 957822.91 200.45 1999569.53 00:11:39.514 00:11:39.514 [2024-07-15 21:27:29.143817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb0a70 (9): Bad file descriptor 00:11:39.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:39.514 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.514 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:39.514 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2068151 00:11:39.514 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2068151 00:11:40.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2068151) - No such process 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2068151 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2068151 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2068151 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:40.087 21:27:29 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.087 [2024-07-15 21:27:29.674911] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2068987 00:11:40.087 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:40.088 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:40.088 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2068987 00:11:40.088 21:27:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:40.088 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.088 [2024-07-15 21:27:29.742742] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
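(The delete_subsystem.sh@57/@58/@60 lines that follow are the harness polling for the backgrounded spdk_nvme_perf to exit. Stripped of the xtrace noise, the pattern is roughly the sketch below; the core mask, transport address, 3-second run time and 20-iteration bound are the values visible in this log, and the sketch is illustrative rather than the verbatim test script.)

# $SPDK as in the earlier sketch. Launch the workload in the background,
# using the same options the log records at delete_subsystem.sh@52.
"$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Poll until the perf process goes away, giving up after about 10 s (20 x 0.5 s);
# this is the kill -0 / sleep 0.5 loop visible in the log below.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then
        echo "spdk_nvme_perf (pid $perf_pid) did not exit in time" >&2
        break
    fi
    sleep 0.5
done
wait "$perf_pid"   # reap the background job before tearing the target down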
00:11:40.659 21:27:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:40.659 21:27:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2068987 00:11:40.659 21:27:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:40.921 21:27:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:40.921 21:27:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2068987 00:11:40.921 21:27:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:41.492 21:27:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:41.492 21:27:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2068987 00:11:41.492 21:27:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:42.062 21:27:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:42.062 21:27:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2068987 00:11:42.062 21:27:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:42.631 21:27:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:42.631 21:27:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2068987 00:11:42.631 21:27:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:43.201 21:27:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:43.201 21:27:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2068987 00:11:43.201 21:27:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:43.201 Initializing NVMe Controllers 00:11:43.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:43.201 Controller IO queue size 128, less than required. 00:11:43.201 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:43.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:43.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:43.201 Initialization complete. Launching workers. 
00:11:43.201 ======================================================== 00:11:43.201 Latency(us) 00:11:43.201 Device Information : IOPS MiB/s Average min max 00:11:43.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002441.00 1000331.75 1042266.05 00:11:43.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003127.10 1000281.66 1041393.91 00:11:43.201 ======================================================== 00:11:43.201 Total : 256.00 0.12 1002784.05 1000281.66 1042266.05 00:11:43.201 00:11:43.460 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:43.460 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2068987 00:11:43.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2068987) - No such process 00:11:43.460 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2068987 00:11:43.460 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:43.460 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:43.460 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:43.460 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:43.460 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:43.460 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:43.461 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:43.461 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:43.461 rmmod nvme_tcp 00:11:43.461 rmmod nvme_fabrics 00:11:43.720 rmmod nvme_keyring 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2068072 ']' 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2068072 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2068072 ']' 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2068072 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2068072 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2068072' 00:11:43.720 killing process with pid 2068072 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2068072 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
2068072 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.720 21:27:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.264 21:27:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:46.264 00:11:46.264 real 0m17.845s 00:11:46.264 user 0m30.716s 00:11:46.264 sys 0m6.262s 00:11:46.264 21:27:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.264 21:27:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.264 ************************************ 00:11:46.264 END TEST nvmf_delete_subsystem 00:11:46.264 ************************************ 00:11:46.264 21:27:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:46.264 21:27:35 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:46.264 21:27:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:46.264 21:27:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.264 21:27:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:46.264 ************************************ 00:11:46.264 START TEST nvmf_ns_masking 00:11:46.264 ************************************ 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:46.264 * Looking for test storage... 
00:11:46.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c47f74ca-ade6-4d16-a803-a676d54bbe3e 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=861577b4-97fd-46f9-914d-0e120cc67f54 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a1d5b441-0fcb-4994-8eeb-64dbbb0e656f 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:46.264 21:27:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:52.851 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:52.851 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.851 
21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:52.851 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:52.851 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:52.851 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:53.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:11:53.111 00:11:53.111 --- 10.0.0.2 ping statistics --- 00:11:53.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.111 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:11:53.111 00:11:53.111 --- 10.0.0.1 ping statistics --- 00:11:53.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.111 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2073787 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2073787 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2073787 ']' 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:53.111 21:27:42 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:53.111 21:27:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:53.111 [2024-07-15 21:27:42.842462] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:11:53.111 [2024-07-15 21:27:42.842513] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.111 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.111 [2024-07-15 21:27:42.908462] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.372 [2024-07-15 21:27:42.972254] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.372 [2024-07-15 21:27:42.972290] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.373 [2024-07-15 21:27:42.972297] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.373 [2024-07-15 21:27:42.972303] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.373 [2024-07-15 21:27:42.972312] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.373 [2024-07-15 21:27:42.972332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.952 21:27:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.952 21:27:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:53.952 21:27:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:53.952 21:27:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:53.952 21:27:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:53.952 21:27:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.952 21:27:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:54.213 [2024-07-15 21:27:43.778895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.213 21:27:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:54.213 21:27:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:54.213 21:27:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:54.213 Malloc1 00:11:54.213 21:27:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:54.473 Malloc2 00:11:54.473 21:27:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
00:11:54.733 21:27:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:54.733 21:27:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.995 [2024-07-15 21:27:44.600852] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.995 21:27:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:54.995 21:27:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a1d5b441-0fcb-4994-8eeb-64dbbb0e656f -a 10.0.0.2 -s 4420 -i 4 00:11:55.257 21:27:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:55.257 21:27:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:55.257 21:27:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:55.257 21:27:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:55.257 21:27:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:57.170 21:27:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:57.170 21:27:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:57.170 21:27:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:57.170 21:27:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:57.170 21:27:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.170 21:27:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:57.170 21:27:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:57.170 21:27:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:57.170 21:27:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:57.170 21:27:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:57.170 21:27:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:57.170 21:27:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:57.170 21:27:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:57.170 [ 0]:0x1 00:11:57.170 21:27:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:57.170 21:27:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.430 21:27:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=881e9ab014ca4f85bd34fa4339ea7974 00:11:57.430 21:27:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 881e9ab014ca4f85bd34fa4339ea7974 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.430 21:27:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
00:11:57.430 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:57.430 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:57.430 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:57.430 [ 0]:0x1 00:11:57.430 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:57.430 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.430 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=881e9ab014ca4f85bd34fa4339ea7974 00:11:57.430 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 881e9ab014ca4f85bd34fa4339ea7974 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.430 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:57.430 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:57.430 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:57.430 [ 1]:0x2 00:11:57.430 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:57.430 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.690 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=58dacdad8b0a4dfdaf5d9f0f1caba3e3 00:11:57.690 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 58dacdad8b0a4dfdaf5d9f0f1caba3e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.690 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:57.690 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.690 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.952 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:57.952 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:57.952 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a1d5b441-0fcb-4994-8eeb-64dbbb0e656f -a 10.0.0.2 -s 4420 -i 4 00:11:58.212 21:27:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:58.212 21:27:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:58.212 21:27:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.212 21:27:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:58.212 21:27:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:58.212 21:27:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:00.125 21:27:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:00.125 21:27:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:00.125 21:27:49 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:00.125 21:27:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:00.125 21:27:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.125 21:27:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:00.125 21:27:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:00.126 21:27:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:00.126 21:27:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:00.126 21:27:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:00.126 21:27:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:00.126 21:27:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:00.126 21:27:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:00.126 21:27:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:00.126 21:27:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:00.126 21:27:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:00.126 21:27:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:00.126 21:27:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:00.126 21:27:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:00.126 21:27:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.387 21:27:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:00.387 21:27:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.387 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:00.387 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.387 21:27:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:00.387 21:27:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:00.387 21:27:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:00.387 21:27:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:00.387 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:00.387 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.387 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:00.387 [ 0]:0x2 00:12:00.387 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:00.387 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.387 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=58dacdad8b0a4dfdaf5d9f0f1caba3e3 00:12:00.387 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
58dacdad8b0a4dfdaf5d9f0f1caba3e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.387 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:00.649 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:00.649 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.649 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:00.649 [ 0]:0x1 00:12:00.649 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:00.649 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.649 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=881e9ab014ca4f85bd34fa4339ea7974 00:12:00.649 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 881e9ab014ca4f85bd34fa4339ea7974 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.649 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:00.649 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.649 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:00.649 [ 1]:0x2 00:12:00.649 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:00.649 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.649 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=58dacdad8b0a4dfdaf5d9f0f1caba3e3 00:12:00.649 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 58dacdad8b0a4dfdaf5d9f0f1caba3e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.649 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:00.917 [ 0]:0x2 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=58dacdad8b0a4dfdaf5d9f0f1caba3e3 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 58dacdad8b0a4dfdaf5d9f0f1caba3e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:00.917 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.239 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:01.239 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:01.239 21:27:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a1d5b441-0fcb-4994-8eeb-64dbbb0e656f -a 10.0.0.2 -s 4420 -i 4 00:12:01.500 21:27:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:01.500 21:27:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:01.500 21:27:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.500 21:27:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:01.500 21:27:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:01.500 21:27:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:03.413 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:03.413 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:03.413 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.413 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:03.413 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.413 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
00:12:03.413 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:03.413 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:03.413 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:03.413 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:03.413 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:03.413 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:03.413 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.413 [ 0]:0x1 00:12:03.414 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.414 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.414 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=881e9ab014ca4f85bd34fa4339ea7974 00:12:03.414 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 881e9ab014ca4f85bd34fa4339ea7974 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.414 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:03.414 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.414 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:03.414 [ 1]:0x2 00:12:03.414 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.414 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.414 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=58dacdad8b0a4dfdaf5d9f0f1caba3e3 00:12:03.414 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 58dacdad8b0a4dfdaf5d9f0f1caba3e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.414 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:03.674 [ 0]:0x2 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.674 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=58dacdad8b0a4dfdaf5d9f0f1caba3e3 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 58dacdad8b0a4dfdaf5d9f0f1caba3e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:03.935 [2024-07-15 21:27:53.630797] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:03.935 request: 00:12:03.935 { 00:12:03.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:03.935 "nsid": 2, 00:12:03.935 "host": "nqn.2016-06.io.spdk:host1", 00:12:03.935 "method": "nvmf_ns_remove_host", 00:12:03.935 "req_id": 1 00:12:03.935 } 00:12:03.935 Got JSON-RPC error response 00:12:03.935 response: 00:12:03.935 { 00:12:03.935 "code": -32602, 00:12:03.935 "message": "Invalid parameters" 00:12:03.935 } 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:03.935 [ 0]:0x2 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:03.935 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.196 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=58dacdad8b0a4dfdaf5d9f0f1caba3e3 00:12:04.196 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
58dacdad8b0a4dfdaf5d9f0f1caba3e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.196 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:04.196 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.196 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2076161 00:12:04.196 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.196 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:04.196 21:27:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2076161 /var/tmp/host.sock 00:12:04.196 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2076161 ']' 00:12:04.196 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:04.196 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:04.196 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:04.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:04.196 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:04.196 21:27:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:04.196 [2024-07-15 21:27:53.878806] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:12:04.196 [2024-07-15 21:27:53.878861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2076161 ] 00:12:04.196 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.196 [2024-07-15 21:27:53.954874] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.457 [2024-07-15 21:27:54.019550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.029 21:27:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:05.029 21:27:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:05.029 21:27:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.029 21:27:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:05.290 21:27:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c47f74ca-ade6-4d16-a803-a676d54bbe3e 00:12:05.290 21:27:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:05.290 21:27:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C47F74CAADE64D16A803A676D54BBE3E -i 00:12:05.550 21:27:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 861577b4-97fd-46f9-914d-0e120cc67f54 00:12:05.550 21:27:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:05.550 21:27:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 861577B497FD46F9914D0E120CC67F54 -i 00:12:05.550 21:27:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:05.810 21:27:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:05.810 21:27:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:05.810 21:27:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:06.071 nvme0n1 00:12:06.331 21:27:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:06.331 21:27:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:06.592 nvme1n2 00:12:06.592 21:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:06.592 21:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:06.592 21:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:06.592 21:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:06.592 21:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:06.853 21:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:06.853 21:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:06.853 21:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:06.853 21:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:06.853 21:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c47f74ca-ade6-4d16-a803-a676d54bbe3e == \c\4\7\f\7\4\c\a\-\a\d\e\6\-\4\d\1\6\-\a\8\0\3\-\a\6\7\6\d\5\4\b\b\e\3\e ]] 00:12:06.853 21:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:06.853 21:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:06.853 21:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:07.115 21:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 861577b4-97fd-46f9-914d-0e120cc67f54 == \8\6\1\5\7\7\b\4\-\9\7\f\d\-\4\6\f\9\-\9\1\4\d\-\0\e\1\2\0\c\c\6\7\f\5\4 ]] 00:12:07.115 21:27:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2076161 00:12:07.115 21:27:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2076161 ']' 00:12:07.115 21:27:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2076161 00:12:07.115 21:27:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:07.115 21:27:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:07.115 21:27:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2076161 00:12:07.115 21:27:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:07.115 21:27:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:07.115 21:27:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2076161' 00:12:07.115 killing process with pid 2076161 00:12:07.115 21:27:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2076161 00:12:07.115 21:27:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2076161 00:12:07.376 21:27:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.376 21:27:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:07.376 21:27:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:07.376 21:27:57 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:07.376 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:07.376 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:07.376 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:07.376 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:07.376 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:07.376 rmmod nvme_tcp 00:12:07.637 rmmod nvme_fabrics 00:12:07.637 rmmod nvme_keyring 00:12:07.637 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:07.637 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:07.637 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:07.637 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2073787 ']' 00:12:07.637 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2073787 00:12:07.637 21:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2073787 ']' 00:12:07.637 21:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2073787 00:12:07.637 21:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:07.637 21:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:07.637 21:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2073787 00:12:07.637 21:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:07.637 21:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:07.637 21:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2073787' 00:12:07.637 killing process with pid 2073787 00:12:07.637 21:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2073787 00:12:07.637 21:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2073787 00:12:07.897 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.897 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:07.897 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:07.897 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.897 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:07.897 21:27:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.897 21:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.897 21:27:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.810 21:27:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:09.810 00:12:09.810 real 0m23.881s 00:12:09.810 user 0m24.040s 00:12:09.810 sys 0m7.136s 00:12:09.810 21:27:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.810 21:27:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:09.810 ************************************ 00:12:09.810 END TEST nvmf_ns_masking 00:12:09.810 ************************************ 00:12:09.810 21:27:59 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:09.810 21:27:59 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:09.810 21:27:59 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:09.810 21:27:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:09.810 21:27:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.810 21:27:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:09.810 ************************************ 00:12:09.810 START TEST nvmf_nvme_cli 00:12:09.810 ************************************ 00:12:09.810 21:27:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:10.071 * Looking for test storage... 00:12:10.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:10.071 21:27:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:16.659 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:16.659 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:16.659 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:16.659 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.659 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.920 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.920 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.920 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:16.920 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.920 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.920 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.181 21:28:06 
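Before the pings that follow, nvmf/common.sh split the two detected e810 ports into a target/initiator pair: the first port moves into a network namespace and acts as the target, the second stays in the root namespace as the initiator. Condensed from the commands above (interface names are the ones detected on this host):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # initiator -> target (next in the log)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator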
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:17.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:12:17.181 00:12:17.181 --- 10.0.0.2 ping statistics --- 00:12:17.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.181 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:17.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.396 ms 00:12:17.181 00:12:17.181 --- 10.0.0.1 ping statistics --- 00:12:17.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.181 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2080991 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2080991 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2080991 ']' 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:17.181 21:28:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:17.181 [2024-07-15 21:28:06.856313] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
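With the namespace plumbed and both pings answered, nvmfappstart launches the target application inside it and waits for the RPC socket to come up. A minimal sketch of that step, with the workspace prefix shortened and rpc_get_methods used only as an illustrative liveness probe (the harness's waitforlisten does the actual polling):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do sleep 1; done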
00:12:17.181 [2024-07-15 21:28:06.856375] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.181 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.181 [2024-07-15 21:28:06.925622] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.442 [2024-07-15 21:28:06.994116] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.442 [2024-07-15 21:28:06.994160] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.442 [2024-07-15 21:28:06.994168] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.442 [2024-07-15 21:28:06.994174] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.442 [2024-07-15 21:28:06.994179] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.442 [2024-07-15 21:28:06.994330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.442 [2024-07-15 21:28:06.994448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.442 [2024-07-15 21:28:06.994608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.442 [2024-07-15 21:28:06.994609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.013 [2024-07-15 21:28:07.664815] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.013 Malloc0 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.013 Malloc1 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.013 21:28:07 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.013 [2024-07-15 21:28:07.754838] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.013 21:28:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:18.274 00:12:18.274 Discovery Log Number of Records 2, Generation counter 2 00:12:18.274 =====Discovery Log Entry 0====== 00:12:18.274 trtype: tcp 00:12:18.274 adrfam: ipv4 00:12:18.274 subtype: current discovery subsystem 00:12:18.274 treq: not required 00:12:18.274 portid: 0 00:12:18.274 trsvcid: 4420 00:12:18.274 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:18.274 traddr: 10.0.0.2 00:12:18.274 eflags: explicit discovery connections, duplicate discovery information 00:12:18.274 sectype: none 00:12:18.274 =====Discovery Log Entry 1====== 00:12:18.274 trtype: tcp 00:12:18.274 adrfam: ipv4 00:12:18.274 subtype: nvme subsystem 00:12:18.274 treq: not required 00:12:18.274 portid: 0 00:12:18.274 trsvcid: 4420 00:12:18.274 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:18.274 traddr: 10.0.0.2 00:12:18.274 eflags: none 00:12:18.274 sectype: none 00:12:18.274 21:28:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:18.274 21:28:07 nvmf_tcp.nvmf_nvme_cli -- 
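The rpc_cmd calls above provision the target end to end; collapsed into one sketch for readability (the rpc wrapper and shortened script path are illustrative, the arguments are the ones that appear in the log, and the hostnqn/hostid flags passed to nvme discover are omitted here):

  rpc() { ip netns exec cvl_0_0_ns_spdk scripts/rpc.py "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512 -b Malloc0
  rpc bdev_malloc_create 64 512 -b Malloc1
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  nvme discover -t tcp -a 10.0.0.2 -s 4420    # reports the discovery subsystem plus cnode1, as shown above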
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:18.274 21:28:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:18.274 21:28:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:18.274 21:28:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:18.274 21:28:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:18.274 21:28:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:18.274 21:28:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:18.274 21:28:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:18.274 21:28:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:18.274 21:28:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.205 21:28:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:20.205 21:28:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:20.205 21:28:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.205 21:28:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:20.205 21:28:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:20.205 21:28:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:22.117 21:28:11 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:22.117 /dev/nvme0n1 ]] 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:22.117 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- 
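On the host side the test then connects, waits until both namespaces are visible, and tears everything down again. Condensed from the log above (hostnqn/hostid flags omitted; the retry cap of the real waitforserial loop is dropped for brevity):

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until [[ $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) -ge 2 ]]; do sleep 2; done
  nvme list                                              # both namespaces appear, /dev/nvme0n1 and /dev/nvme0n2 here
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # rpc wrapper as in the earlier sketch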
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:22.118 rmmod nvme_tcp 00:12:22.118 rmmod nvme_fabrics 00:12:22.118 rmmod nvme_keyring 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2080991 ']' 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2080991 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2080991 ']' 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2080991 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2080991 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2080991' 00:12:22.118 killing process with pid 2080991 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2080991 00:12:22.118 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2080991 00:12:22.379 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:22.379 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:22.379 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:22.379 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:22.379 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:22.379 21:28:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.379 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.379 21:28:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.292 21:28:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:24.292 00:12:24.292 real 0m14.450s 00:12:24.292 user 0m21.970s 00:12:24.292 sys 0m5.741s 00:12:24.292 21:28:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:24.292 21:28:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:24.292 ************************************ 00:12:24.292 END TEST nvmf_nvme_cli 00:12:24.292 ************************************ 00:12:24.292 21:28:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:24.292 21:28:14 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:24.292 21:28:14 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:24.292 21:28:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:24.292 21:28:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.292 21:28:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:24.589 ************************************ 00:12:24.589 START TEST nvmf_vfio_user 00:12:24.589 ************************************ 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:24.589 * Looking for test storage... 00:12:24.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:24.589 
21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2082497 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2082497' 00:12:24.589 Process pid: 2082497 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2082497 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2082497 ']' 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:24.589 21:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:24.589 [2024-07-15 21:28:14.313699] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:12:24.589 [2024-07-15 21:28:14.313754] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.589 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.589 [2024-07-15 21:28:14.376798] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.850 [2024-07-15 21:28:14.442996] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.850 [2024-07-15 21:28:14.443030] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.850 [2024-07-15 21:28:14.443037] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.850 [2024-07-15 21:28:14.443043] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.850 [2024-07-15 21:28:14.443049] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
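The vfio-user variant skips the network namespace entirely: the transport is a local socket tree under /var/run/vfio-user, so the target is simply started on cores 0-3 in the root namespace. A minimal sketch of the prologue above (workspace prefix shortened):

  export TEST_TRANSPORT=VFIOUSER
  rm -rf /var/run/vfio-user
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!                                  # 2082497 in this run; waitforlisten polls the RPC socket next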
00:12:24.850 [2024-07-15 21:28:14.443181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.850 [2024-07-15 21:28:14.443202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.850 [2024-07-15 21:28:14.443350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.850 [2024-07-15 21:28:14.443351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.420 21:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:25.420 21:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:25.420 21:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:26.361 21:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:26.621 21:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:26.621 21:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:26.621 21:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:26.621 21:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:26.621 21:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:26.882 Malloc1 00:12:26.882 21:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:26.882 21:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:27.142 21:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:27.403 21:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:27.403 21:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:27.403 21:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:27.403 Malloc2 00:12:27.403 21:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:27.664 21:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:27.664 21:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:27.925 21:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:27.925 21:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:27.925 21:28:17 
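setup_nvmf_vfio_user then creates two vfio-user controllers, one Malloc namespace each, listening on per-controller directories. Collapsed from the calls above (rpc path shortened):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done

spdk_nvme_identify is then pointed at the first controller with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1', which produces the controller bring-up debug trace that follows.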
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:27.925 21:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:27.925 21:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:27.925 21:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:27.925 [2024-07-15 21:28:17.649177] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:12:27.925 [2024-07-15 21:28:17.649222] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2083177 ] 00:12:27.925 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.925 [2024-07-15 21:28:17.679756] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:27.925 [2024-07-15 21:28:17.688487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:27.925 [2024-07-15 21:28:17.688507] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f86c6c3e000 00:12:27.925 [2024-07-15 21:28:17.689487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.925 [2024-07-15 21:28:17.690496] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.925 [2024-07-15 21:28:17.691490] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.925 [2024-07-15 21:28:17.692507] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:27.925 [2024-07-15 21:28:17.693510] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:27.925 [2024-07-15 21:28:17.694512] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.925 [2024-07-15 21:28:17.695519] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:27.925 [2024-07-15 21:28:17.696525] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.925 [2024-07-15 21:28:17.697535] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:27.925 [2024-07-15 21:28:17.697546] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f86c6c33000 00:12:27.925 [2024-07-15 21:28:17.698874] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:27.925 [2024-07-15 21:28:17.719786] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:27.925 [2024-07-15 21:28:17.719812] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:27.925 [2024-07-15 21:28:17.722672] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:27.925 [2024-07-15 21:28:17.722722] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:27.925 [2024-07-15 21:28:17.722808] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:27.925 [2024-07-15 21:28:17.722824] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:27.925 [2024-07-15 21:28:17.722830] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:27.925 [2024-07-15 21:28:17.723670] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:27.925 [2024-07-15 21:28:17.723679] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:27.925 [2024-07-15 21:28:17.723686] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:27.925 [2024-07-15 21:28:17.724675] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:27.925 [2024-07-15 21:28:17.724684] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:27.925 [2024-07-15 21:28:17.724691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:27.925 [2024-07-15 21:28:17.725674] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:27.925 [2024-07-15 21:28:17.725683] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:27.925 [2024-07-15 21:28:17.726681] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:27.925 [2024-07-15 21:28:17.726690] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:27.926 [2024-07-15 21:28:17.726694] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:27.926 [2024-07-15 21:28:17.726701] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:27.926 [2024-07-15 21:28:17.726806] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:27.926 [2024-07-15 21:28:17.726811] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:27.926 [2024-07-15 21:28:17.726816] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:27.926 [2024-07-15 21:28:17.727682] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:27.926 [2024-07-15 21:28:17.728690] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:27.926 [2024-07-15 21:28:17.729697] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:28.187 [2024-07-15 21:28:17.730698] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:28.187 [2024-07-15 21:28:17.730751] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:28.187 [2024-07-15 21:28:17.731713] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:28.187 [2024-07-15 21:28:17.731720] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:28.187 [2024-07-15 21:28:17.731725] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:28.187 [2024-07-15 21:28:17.731746] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:28.187 [2024-07-15 21:28:17.731753] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:28.187 [2024-07-15 21:28:17.731767] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:28.187 [2024-07-15 21:28:17.731772] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:28.187 [2024-07-15 21:28:17.731784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:28.187 [2024-07-15 21:28:17.731817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:28.187 [2024-07-15 21:28:17.731826] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:28.188 [2024-07-15 21:28:17.731833] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:28.188 [2024-07-15 21:28:17.731840] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:28.188 [2024-07-15 21:28:17.731844] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:28.188 [2024-07-15 21:28:17.731849] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:28.188 [2024-07-15 21:28:17.731853] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:28.188 [2024-07-15 21:28:17.731858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.731866] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.731876] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:28.188 [2024-07-15 21:28:17.731883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:28.188 [2024-07-15 21:28:17.731897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.188 [2024-07-15 21:28:17.731905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.188 [2024-07-15 21:28:17.731914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.188 [2024-07-15 21:28:17.731922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.188 [2024-07-15 21:28:17.731927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.731937] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.731946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:28.188 [2024-07-15 21:28:17.731958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:28.188 [2024-07-15 21:28:17.731964] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:28.188 [2024-07-15 21:28:17.731968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.731975] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.731981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.731989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:28.188 [2024-07-15 21:28:17.732001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:28.188 [2024-07-15 21:28:17.732062] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.732070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.732078] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:28.188 [2024-07-15 21:28:17.732084] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:28.188 [2024-07-15 21:28:17.732090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:28.188 [2024-07-15 21:28:17.732105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:28.188 [2024-07-15 21:28:17.732114] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:28.188 [2024-07-15 21:28:17.732129] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.732137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.732144] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:28.188 [2024-07-15 21:28:17.732148] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:28.188 [2024-07-15 21:28:17.732154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:28.188 [2024-07-15 21:28:17.732168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:28.188 [2024-07-15 21:28:17.732181] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.732188] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.732195] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:28.188 [2024-07-15 21:28:17.732200] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:28.188 [2024-07-15 21:28:17.732206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:28.188 [2024-07-15 21:28:17.732215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:28.188 [2024-07-15 21:28:17.732223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.732229] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
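The register reads/writes and state transitions above are the admin-queue bring-up performed by the spdk_nvme_identify run started at target/nvmf_vfio_user.sh@83. As a hedged sketch (assuming the standard scripts/rpc.py option names, and reusing only the paths, serial number and NQN that appear in this run), the endpoint being probed here is created and queried roughly like this:

  # sketch only - not the literal test script; rpc.py is scripts/rpc.py in the SPDK tree
  rpc.py nvmf_create_transport -t VFIOUSER
  rpc.py bdev_malloc_create 64 512 --name Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 -m 32
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
  # query the controller through the vfio-user socket with the same debug flags used above
  build/bin/spdk_nvme_identify -g -L nvme -L nvme_vfio -L vfio_pci \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

The controller report this produces (capabilities, log pages, namespace layout) is printed a little further below.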
00:12:28.188 [2024-07-15 21:28:17.732236] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.732242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.732247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.732252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.732257] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:28.188 [2024-07-15 21:28:17.732261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:28.188 [2024-07-15 21:28:17.732266] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:28.188 [2024-07-15 21:28:17.732284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:28.188 [2024-07-15 21:28:17.732294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:28.188 [2024-07-15 21:28:17.732306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:28.188 [2024-07-15 21:28:17.732313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:28.188 [2024-07-15 21:28:17.732324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:28.188 [2024-07-15 21:28:17.732334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:28.188 [2024-07-15 21:28:17.732345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:28.188 [2024-07-15 21:28:17.732357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:28.188 [2024-07-15 21:28:17.732370] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:28.188 [2024-07-15 21:28:17.732374] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:28.188 [2024-07-15 21:28:17.732378] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:28.188 [2024-07-15 21:28:17.732382] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:28.188 [2024-07-15 21:28:17.732388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:28.188 [2024-07-15 21:28:17.732396] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:28.188 
[2024-07-15 21:28:17.732400] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:28.188 [2024-07-15 21:28:17.732406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:28.188 [2024-07-15 21:28:17.732413] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:28.188 [2024-07-15 21:28:17.732417] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:28.188 [2024-07-15 21:28:17.732423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:28.188 [2024-07-15 21:28:17.732431] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:28.188 [2024-07-15 21:28:17.732436] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:28.188 [2024-07-15 21:28:17.732441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:28.188 [2024-07-15 21:28:17.732449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:28.188 [2024-07-15 21:28:17.732460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:28.188 [2024-07-15 21:28:17.732471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:28.188 [2024-07-15 21:28:17.732478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:28.188 ===================================================== 00:12:28.188 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:28.188 ===================================================== 00:12:28.188 Controller Capabilities/Features 00:12:28.188 ================================ 00:12:28.188 Vendor ID: 4e58 00:12:28.188 Subsystem Vendor ID: 4e58 00:12:28.188 Serial Number: SPDK1 00:12:28.188 Model Number: SPDK bdev Controller 00:12:28.188 Firmware Version: 24.09 00:12:28.188 Recommended Arb Burst: 6 00:12:28.188 IEEE OUI Identifier: 8d 6b 50 00:12:28.188 Multi-path I/O 00:12:28.188 May have multiple subsystem ports: Yes 00:12:28.188 May have multiple controllers: Yes 00:12:28.188 Associated with SR-IOV VF: No 00:12:28.188 Max Data Transfer Size: 131072 00:12:28.188 Max Number of Namespaces: 32 00:12:28.188 Max Number of I/O Queues: 127 00:12:28.188 NVMe Specification Version (VS): 1.3 00:12:28.188 NVMe Specification Version (Identify): 1.3 00:12:28.188 Maximum Queue Entries: 256 00:12:28.188 Contiguous Queues Required: Yes 00:12:28.188 Arbitration Mechanisms Supported 00:12:28.188 Weighted Round Robin: Not Supported 00:12:28.188 Vendor Specific: Not Supported 00:12:28.188 Reset Timeout: 15000 ms 00:12:28.188 Doorbell Stride: 4 bytes 00:12:28.188 NVM Subsystem Reset: Not Supported 00:12:28.188 Command Sets Supported 00:12:28.188 NVM Command Set: Supported 00:12:28.188 Boot Partition: Not Supported 00:12:28.188 Memory Page Size Minimum: 4096 bytes 00:12:28.188 Memory Page Size Maximum: 4096 bytes 00:12:28.188 Persistent Memory Region: Not Supported 
00:12:28.188 Optional Asynchronous Events Supported 00:12:28.188 Namespace Attribute Notices: Supported 00:12:28.188 Firmware Activation Notices: Not Supported 00:12:28.188 ANA Change Notices: Not Supported 00:12:28.188 PLE Aggregate Log Change Notices: Not Supported 00:12:28.188 LBA Status Info Alert Notices: Not Supported 00:12:28.188 EGE Aggregate Log Change Notices: Not Supported 00:12:28.188 Normal NVM Subsystem Shutdown event: Not Supported 00:12:28.188 Zone Descriptor Change Notices: Not Supported 00:12:28.188 Discovery Log Change Notices: Not Supported 00:12:28.188 Controller Attributes 00:12:28.188 128-bit Host Identifier: Supported 00:12:28.188 Non-Operational Permissive Mode: Not Supported 00:12:28.188 NVM Sets: Not Supported 00:12:28.188 Read Recovery Levels: Not Supported 00:12:28.188 Endurance Groups: Not Supported 00:12:28.188 Predictable Latency Mode: Not Supported 00:12:28.188 Traffic Based Keep ALive: Not Supported 00:12:28.188 Namespace Granularity: Not Supported 00:12:28.188 SQ Associations: Not Supported 00:12:28.188 UUID List: Not Supported 00:12:28.188 Multi-Domain Subsystem: Not Supported 00:12:28.188 Fixed Capacity Management: Not Supported 00:12:28.188 Variable Capacity Management: Not Supported 00:12:28.188 Delete Endurance Group: Not Supported 00:12:28.188 Delete NVM Set: Not Supported 00:12:28.188 Extended LBA Formats Supported: Not Supported 00:12:28.188 Flexible Data Placement Supported: Not Supported 00:12:28.188 00:12:28.188 Controller Memory Buffer Support 00:12:28.188 ================================ 00:12:28.188 Supported: No 00:12:28.188 00:12:28.188 Persistent Memory Region Support 00:12:28.188 ================================ 00:12:28.189 Supported: No 00:12:28.189 00:12:28.189 Admin Command Set Attributes 00:12:28.189 ============================ 00:12:28.189 Security Send/Receive: Not Supported 00:12:28.189 Format NVM: Not Supported 00:12:28.189 Firmware Activate/Download: Not Supported 00:12:28.189 Namespace Management: Not Supported 00:12:28.189 Device Self-Test: Not Supported 00:12:28.189 Directives: Not Supported 00:12:28.189 NVMe-MI: Not Supported 00:12:28.189 Virtualization Management: Not Supported 00:12:28.189 Doorbell Buffer Config: Not Supported 00:12:28.189 Get LBA Status Capability: Not Supported 00:12:28.189 Command & Feature Lockdown Capability: Not Supported 00:12:28.189 Abort Command Limit: 4 00:12:28.189 Async Event Request Limit: 4 00:12:28.189 Number of Firmware Slots: N/A 00:12:28.189 Firmware Slot 1 Read-Only: N/A 00:12:28.189 Firmware Activation Without Reset: N/A 00:12:28.189 Multiple Update Detection Support: N/A 00:12:28.189 Firmware Update Granularity: No Information Provided 00:12:28.189 Per-Namespace SMART Log: No 00:12:28.189 Asymmetric Namespace Access Log Page: Not Supported 00:12:28.189 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:28.189 Command Effects Log Page: Supported 00:12:28.189 Get Log Page Extended Data: Supported 00:12:28.189 Telemetry Log Pages: Not Supported 00:12:28.189 Persistent Event Log Pages: Not Supported 00:12:28.189 Supported Log Pages Log Page: May Support 00:12:28.189 Commands Supported & Effects Log Page: Not Supported 00:12:28.189 Feature Identifiers & Effects Log Page:May Support 00:12:28.189 NVMe-MI Commands & Effects Log Page: May Support 00:12:28.189 Data Area 4 for Telemetry Log: Not Supported 00:12:28.189 Error Log Page Entries Supported: 128 00:12:28.189 Keep Alive: Supported 00:12:28.189 Keep Alive Granularity: 10000 ms 00:12:28.189 00:12:28.189 NVM Command Set Attributes 
00:12:28.189 ========================== 00:12:28.189 Submission Queue Entry Size 00:12:28.189 Max: 64 00:12:28.189 Min: 64 00:12:28.189 Completion Queue Entry Size 00:12:28.189 Max: 16 00:12:28.189 Min: 16 00:12:28.189 Number of Namespaces: 32 00:12:28.189 Compare Command: Supported 00:12:28.189 Write Uncorrectable Command: Not Supported 00:12:28.189 Dataset Management Command: Supported 00:12:28.189 Write Zeroes Command: Supported 00:12:28.189 Set Features Save Field: Not Supported 00:12:28.189 Reservations: Not Supported 00:12:28.189 Timestamp: Not Supported 00:12:28.189 Copy: Supported 00:12:28.189 Volatile Write Cache: Present 00:12:28.189 Atomic Write Unit (Normal): 1 00:12:28.189 Atomic Write Unit (PFail): 1 00:12:28.189 Atomic Compare & Write Unit: 1 00:12:28.189 Fused Compare & Write: Supported 00:12:28.189 Scatter-Gather List 00:12:28.189 SGL Command Set: Supported (Dword aligned) 00:12:28.189 SGL Keyed: Not Supported 00:12:28.189 SGL Bit Bucket Descriptor: Not Supported 00:12:28.189 SGL Metadata Pointer: Not Supported 00:12:28.189 Oversized SGL: Not Supported 00:12:28.189 SGL Metadata Address: Not Supported 00:12:28.189 SGL Offset: Not Supported 00:12:28.189 Transport SGL Data Block: Not Supported 00:12:28.189 Replay Protected Memory Block: Not Supported 00:12:28.189 00:12:28.189 Firmware Slot Information 00:12:28.189 ========================= 00:12:28.189 Active slot: 1 00:12:28.189 Slot 1 Firmware Revision: 24.09 00:12:28.189 00:12:28.189 00:12:28.189 Commands Supported and Effects 00:12:28.189 ============================== 00:12:28.189 Admin Commands 00:12:28.189 -------------- 00:12:28.189 Get Log Page (02h): Supported 00:12:28.189 Identify (06h): Supported 00:12:28.189 Abort (08h): Supported 00:12:28.189 Set Features (09h): Supported 00:12:28.189 Get Features (0Ah): Supported 00:12:28.189 Asynchronous Event Request (0Ch): Supported 00:12:28.189 Keep Alive (18h): Supported 00:12:28.189 I/O Commands 00:12:28.189 ------------ 00:12:28.189 Flush (00h): Supported LBA-Change 00:12:28.189 Write (01h): Supported LBA-Change 00:12:28.189 Read (02h): Supported 00:12:28.189 Compare (05h): Supported 00:12:28.189 Write Zeroes (08h): Supported LBA-Change 00:12:28.189 Dataset Management (09h): Supported LBA-Change 00:12:28.189 Copy (19h): Supported LBA-Change 00:12:28.189 00:12:28.189 Error Log 00:12:28.189 ========= 00:12:28.189 00:12:28.189 Arbitration 00:12:28.189 =========== 00:12:28.189 Arbitration Burst: 1 00:12:28.189 00:12:28.189 Power Management 00:12:28.189 ================ 00:12:28.189 Number of Power States: 1 00:12:28.189 Current Power State: Power State #0 00:12:28.189 Power State #0: 00:12:28.189 Max Power: 0.00 W 00:12:28.189 Non-Operational State: Operational 00:12:28.189 Entry Latency: Not Reported 00:12:28.189 Exit Latency: Not Reported 00:12:28.189 Relative Read Throughput: 0 00:12:28.189 Relative Read Latency: 0 00:12:28.189 Relative Write Throughput: 0 00:12:28.189 Relative Write Latency: 0 00:12:28.189 Idle Power: Not Reported 00:12:28.189 Active Power: Not Reported 00:12:28.189 Non-Operational Permissive Mode: Not Supported 00:12:28.189 00:12:28.189 Health Information 00:12:28.189 ================== 00:12:28.189 Critical Warnings: 00:12:28.189 Available Spare Space: OK 00:12:28.189 Temperature: OK 00:12:28.189 Device Reliability: OK 00:12:28.189 Read Only: No 00:12:28.189 Volatile Memory Backup: OK 00:12:28.189 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:28.189 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:28.189 Available Spare: 0% 00:12:28.189 
[2024-07-15 21:28:17.732578] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:28.189 [2024-07-15 21:28:17.732589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:28.189 [2024-07-15 21:28:17.732617] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:28.189 [2024-07-15 21:28:17.732627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.189 [2024-07-15 21:28:17.732633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.189 [2024-07-15 21:28:17.732639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.189 [2024-07-15 21:28:17.732645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.189 [2024-07-15 21:28:17.732718] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:28.189 [2024-07-15 21:28:17.732728] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:28.189 [2024-07-15 21:28:17.733724] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:28.189 [2024-07-15 21:28:17.733764] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:28.189 [2024-07-15 21:28:17.733770] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:28.189 [2024-07-15 21:28:17.734729] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:28.189 [2024-07-15 21:28:17.734741] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:28.189 [2024-07-15 21:28:17.734802] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:28.189 [2024-07-15 21:28:17.740130] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:28.189 Available Spare Threshold: 0% 00:12:28.189 Life Percentage Used: 0% 00:12:28.189 Data Units Read: 0 00:12:28.189 Data Units Written: 0 00:12:28.189 Host Read Commands: 0 00:12:28.189 Host Write Commands: 0 00:12:28.189 Controller Busy Time: 0 minutes 00:12:28.189 Power Cycles: 0 00:12:28.189 Power On Hours: 0 hours 00:12:28.189 Unsafe Shutdowns: 0 00:12:28.189 Unrecoverable Media Errors: 0 00:12:28.189 Lifetime Error Log Entries: 0 00:12:28.189 Warning Temperature Time: 0 minutes 00:12:28.189 Critical Temperature Time: 0 minutes 00:12:28.189 00:12:28.189 Number of Queues 00:12:28.189 ================ 00:12:28.189 Number of I/O Submission Queues: 127 00:12:28.189 Number of I/O Completion Queues: 127 00:12:28.189 00:12:28.189 Active Namespaces 00:12:28.189 ================= 00:12:28.189 Namespace ID:1 00:12:28.189 Error Recovery Timeout: Unlimited 00:12:28.189 Command 
Set Identifier: NVM (00h) 00:12:28.189 Deallocate: Supported 00:12:28.189 Deallocated/Unwritten Error: Not Supported 00:12:28.189 Deallocated Read Value: Unknown 00:12:28.189 Deallocate in Write Zeroes: Not Supported 00:12:28.189 Deallocated Guard Field: 0xFFFF 00:12:28.189 Flush: Supported 00:12:28.189 Reservation: Supported 00:12:28.189 Namespace Sharing Capabilities: Multiple Controllers 00:12:28.189 Size (in LBAs): 131072 (0GiB) 00:12:28.189 Capacity (in LBAs): 131072 (0GiB) 00:12:28.189 Utilization (in LBAs): 131072 (0GiB) 00:12:28.189 NGUID: 6A2323D5864A466BB464C539170A5D73 00:12:28.189 UUID: 6a2323d5-864a-466b-b464-c539170a5d73 00:12:28.189 Thin Provisioning: Not Supported 00:12:28.189 Per-NS Atomic Units: Yes 00:12:28.189 Atomic Boundary Size (Normal): 0 00:12:28.189 Atomic Boundary Size (PFail): 0 00:12:28.189 Atomic Boundary Offset: 0 00:12:28.189 Maximum Single Source Range Length: 65535 00:12:28.189 Maximum Copy Length: 65535 00:12:28.189 Maximum Source Range Count: 1 00:12:28.189 NGUID/EUI64 Never Reused: No 00:12:28.189 Namespace Write Protected: No 00:12:28.189 Number of LBA Formats: 1 00:12:28.189 Current LBA Format: LBA Format #00 00:12:28.189 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:28.189 00:12:28.189 21:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:28.189 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.189 [2024-07-15 21:28:17.923752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:33.477 Initializing NVMe Controllers 00:12:33.477 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:33.477 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:33.477 Initialization complete. Launching workers. 00:12:33.477 ======================================================== 00:12:33.477 Latency(us) 00:12:33.477 Device Information : IOPS MiB/s Average min max 00:12:33.477 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39944.60 156.03 3207.15 843.34 7944.54 00:12:33.477 ======================================================== 00:12:33.477 Total : 39944.60 156.03 3207.15 843.34 7944.54 00:12:33.477 00:12:33.477 [2024-07-15 21:28:22.944379] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:33.477 21:28:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:33.477 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.477 [2024-07-15 21:28:23.124229] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:38.765 Initializing NVMe Controllers 00:12:38.765 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:38.765 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:38.765 Initialization complete. Launching workers. 
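The two spdk_nvme_perf passes in this iteration use the same invocation against the vfio-user controller and differ only in the workload: -w read above (target/nvmf_vfio_user.sh@84) and -w write below (@85). As a hedged sketch that simply mirrors those command lines: -q is the queue depth, -o the I/O size in bytes, -t the run time in seconds and -c the core mask, so -c 0x2 pins the single worker to core 1; -s 256 and -g are passed through exactly as in the run above.

  PERF=build/bin/spdk_nvme_perf
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  $PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2   # ~39.9k IOPS, ~3.2 ms average latency in this run
  $PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2   # write-pass results are printed below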
00:12:38.765 ======================================================== 00:12:38.765 Latency(us) 00:12:38.765 Device Information : IOPS MiB/s Average min max 00:12:38.765 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.70 7626.37 8055.17 00:12:38.765 ======================================================== 00:12:38.765 Total : 16051.20 62.70 7980.70 7626.37 8055.17 00:12:38.765 00:12:38.765 [2024-07-15 21:28:28.160373] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:38.765 21:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:38.765 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.765 [2024-07-15 21:28:28.344270] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:44.048 [2024-07-15 21:28:33.433439] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:44.048 Initializing NVMe Controllers 00:12:44.048 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:44.048 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:44.048 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:44.048 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:44.048 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:44.048 Initialization complete. Launching workers. 00:12:44.048 Starting thread on core 2 00:12:44.048 Starting thread on core 3 00:12:44.048 Starting thread on core 1 00:12:44.048 21:28:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:44.048 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.048 [2024-07-15 21:28:33.690669] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:47.349 [2024-07-15 21:28:36.745750] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:47.349 Initializing NVMe Controllers 00:12:47.349 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:47.349 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:47.349 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:47.349 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:47.349 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:47.349 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:47.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:47.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:47.349 Initialization complete. Launching workers. 
00:12:47.349 Starting thread on core 1 with urgent priority queue 00:12:47.349 Starting thread on core 2 with urgent priority queue 00:12:47.349 Starting thread on core 3 with urgent priority queue 00:12:47.349 Starting thread on core 0 with urgent priority queue 00:12:47.349 SPDK bdev Controller (SPDK1 ) core 0: 13772.33 IO/s 7.26 secs/100000 ios 00:12:47.349 SPDK bdev Controller (SPDK1 ) core 1: 13102.67 IO/s 7.63 secs/100000 ios 00:12:47.349 SPDK bdev Controller (SPDK1 ) core 2: 11460.00 IO/s 8.73 secs/100000 ios 00:12:47.349 SPDK bdev Controller (SPDK1 ) core 3: 15699.33 IO/s 6.37 secs/100000 ios 00:12:47.349 ======================================================== 00:12:47.349 00:12:47.349 21:28:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:47.349 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.349 [2024-07-15 21:28:37.006542] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:47.349 Initializing NVMe Controllers 00:12:47.349 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:47.349 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:47.349 Namespace ID: 1 size: 0GB 00:12:47.349 Initialization complete. 00:12:47.349 INFO: using host memory buffer for IO 00:12:47.349 Hello world! 00:12:47.349 [2024-07-15 21:28:37.039750] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:47.349 21:28:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:47.349 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.607 [2024-07-15 21:28:37.304535] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.639 Initializing NVMe Controllers 00:12:48.639 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:48.639 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:48.639 Initialization complete. Launching workers. 
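The submit/complete latency histograms from the overhead tool follow below. After them this iteration runs the AER check (target/nvmf_vfio_user.sh@27-@44): it arms an asynchronous-event listener against the controller, then hot-adds a second namespace so the target emits a namespace-attribute-changed notice. A hedged sketch of that sequence, built only from the commands visible later in this log (waitforfile is a helper from autotest_common.sh; the plain while-loop below stands in for it):

  # start the AER listener; it touches the -t file once its callback is armed, which is why the
  # namespace is only added after the wait
  test/nvme/aer/aer -g -n 2 -t /tmp/aer_touch_file \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done
  rm -f /tmp/aer_touch_file
  rpc.py bdev_malloc_create 64 512 --name Malloc3                       # new backing bdev
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2  # attach as NSID 2 -> AEN fires
  rpc.py nvmf_get_subsystems                                            # Malloc3 now listed under cnode1
  wait $aerpid                                                          # listener logs the notice and exits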
00:12:48.639 submit (in ns) avg, min, max = 8094.4, 3914.2, 3999483.3 00:12:48.639 complete (in ns) avg, min, max = 18025.7, 2378.3, 5992447.5 00:12:48.639 00:12:48.639 Submit histogram 00:12:48.639 ================ 00:12:48.639 Range in us Cumulative Count 00:12:48.639 3.893 - 3.920: 0.0316% ( 6) 00:12:48.639 3.920 - 3.947: 0.5379% ( 96) 00:12:48.639 3.947 - 3.973: 4.0922% ( 674) 00:12:48.639 3.973 - 4.000: 13.5422% ( 1792) 00:12:48.639 4.000 - 4.027: 23.9835% ( 1980) 00:12:48.639 4.027 - 4.053: 34.5093% ( 1996) 00:12:48.639 4.053 - 4.080: 45.6362% ( 2110) 00:12:48.639 4.080 - 4.107: 60.7815% ( 2872) 00:12:48.639 4.107 - 4.133: 75.2360% ( 2741) 00:12:48.639 4.133 - 4.160: 87.5283% ( 2331) 00:12:48.639 4.160 - 4.187: 94.5156% ( 1325) 00:12:48.639 4.187 - 4.213: 97.6428% ( 593) 00:12:48.639 4.213 - 4.240: 98.8504% ( 229) 00:12:48.639 4.240 - 4.267: 99.2828% ( 82) 00:12:48.639 4.267 - 4.293: 99.4199% ( 26) 00:12:48.639 4.293 - 4.320: 99.4516% ( 6) 00:12:48.639 4.320 - 4.347: 99.4568% ( 1) 00:12:48.639 4.427 - 4.453: 99.4674% ( 2) 00:12:48.639 4.640 - 4.667: 99.4727% ( 1) 00:12:48.639 4.720 - 4.747: 99.4779% ( 1) 00:12:48.639 4.960 - 4.987: 99.4832% ( 1) 00:12:48.639 5.040 - 5.067: 99.4885% ( 1) 00:12:48.639 5.120 - 5.147: 99.4990% ( 2) 00:12:48.639 5.173 - 5.200: 99.5096% ( 2) 00:12:48.639 5.280 - 5.307: 99.5148% ( 1) 00:12:48.639 5.333 - 5.360: 99.5201% ( 1) 00:12:48.639 5.413 - 5.440: 99.5254% ( 1) 00:12:48.639 5.520 - 5.547: 99.5307% ( 1) 00:12:48.639 5.787 - 5.813: 99.5412% ( 2) 00:12:48.639 5.813 - 5.840: 99.5465% ( 1) 00:12:48.639 5.840 - 5.867: 99.5518% ( 1) 00:12:48.639 5.867 - 5.893: 99.5570% ( 1) 00:12:48.639 5.920 - 5.947: 99.5623% ( 1) 00:12:48.639 5.947 - 5.973: 99.5729% ( 2) 00:12:48.639 6.000 - 6.027: 99.5992% ( 5) 00:12:48.639 6.027 - 6.053: 99.6045% ( 1) 00:12:48.639 6.053 - 6.080: 99.6098% ( 1) 00:12:48.639 6.080 - 6.107: 99.6150% ( 1) 00:12:48.639 6.107 - 6.133: 99.6203% ( 1) 00:12:48.639 6.160 - 6.187: 99.6572% ( 7) 00:12:48.639 6.187 - 6.213: 99.6678% ( 2) 00:12:48.639 6.240 - 6.267: 99.6730% ( 1) 00:12:48.639 6.267 - 6.293: 99.6783% ( 1) 00:12:48.639 6.293 - 6.320: 99.6836% ( 1) 00:12:48.639 6.373 - 6.400: 99.6941% ( 2) 00:12:48.639 6.400 - 6.427: 99.6994% ( 1) 00:12:48.639 6.427 - 6.453: 99.7047% ( 1) 00:12:48.639 6.453 - 6.480: 99.7100% ( 1) 00:12:48.639 6.480 - 6.507: 99.7205% ( 2) 00:12:48.639 6.507 - 6.533: 99.7258% ( 1) 00:12:48.639 6.560 - 6.587: 99.7311% ( 1) 00:12:48.639 6.613 - 6.640: 99.7416% ( 2) 00:12:48.639 6.640 - 6.667: 99.7469% ( 1) 00:12:48.639 6.667 - 6.693: 99.7521% ( 1) 00:12:48.639 6.693 - 6.720: 99.7680% ( 3) 00:12:48.639 6.720 - 6.747: 99.7732% ( 1) 00:12:48.639 6.747 - 6.773: 99.7785% ( 1) 00:12:48.639 6.800 - 6.827: 99.7838% ( 1) 00:12:48.639 6.827 - 6.880: 99.7943% ( 2) 00:12:48.639 6.880 - 6.933: 99.7996% ( 1) 00:12:48.639 6.933 - 6.987: 99.8049% ( 1) 00:12:48.639 6.987 - 7.040: 99.8102% ( 1) 00:12:48.639 7.040 - 7.093: 99.8154% ( 1) 00:12:48.639 7.093 - 7.147: 99.8260% ( 2) 00:12:48.639 7.147 - 7.200: 99.8365% ( 2) 00:12:48.639 7.200 - 7.253: 99.8418% ( 1) 00:12:48.639 7.253 - 7.307: 99.8576% ( 3) 00:12:48.639 7.307 - 7.360: 99.8682% ( 2) 00:12:48.640 7.467 - 7.520: 99.8734% ( 1) 00:12:48.640 7.573 - 7.627: 99.8787% ( 1) 00:12:48.640 7.840 - 7.893: 99.8840% ( 1) 00:12:48.640 8.107 - 8.160: 99.8893% ( 1) 00:12:48.640 8.160 - 8.213: 99.8945% ( 1) 00:12:48.640 12.853 - 12.907: 99.8998% ( 1) 00:12:48.640 3986.773 - 4014.080: 100.0000% ( 19) 00:12:48.640 00:12:48.640 Complete histogram 00:12:48.640 ================== 00:12:48.640 Range in us 
Cumulative Count 00:12:48.640 2.373 - 2.387: 0.0053% ( 1) 00:12:48.640 2.387 - 2.400: 0.0369% ( 6) 00:12:48.640 2.400 - [2024-07-15 21:28:38.324987] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.640 2.413: 1.0283% ( 188) 00:12:48.640 2.413 - 2.427: 1.0969% ( 13) 00:12:48.640 2.427 - 2.440: 1.3289% ( 44) 00:12:48.640 2.440 - 2.453: 6.2807% ( 939) 00:12:48.640 2.453 - 2.467: 52.9505% ( 8850) 00:12:48.640 2.467 - 2.480: 59.3788% ( 1219) 00:12:48.640 2.480 - 2.493: 73.2743% ( 2635) 00:12:48.640 2.493 - 2.507: 80.0612% ( 1287) 00:12:48.640 2.507 - 2.520: 81.9280% ( 354) 00:12:48.640 2.520 - 2.533: 87.5336% ( 1063) 00:12:48.640 2.533 - 2.547: 92.9389% ( 1025) 00:12:48.640 2.547 - 2.560: 95.7918% ( 541) 00:12:48.640 2.560 - 2.573: 97.8801% ( 396) 00:12:48.640 2.573 - 2.587: 98.9980% ( 212) 00:12:48.640 2.587 - 2.600: 99.3514% ( 67) 00:12:48.640 2.600 - 2.613: 99.4041% ( 10) 00:12:48.640 2.613 - 2.627: 99.4199% ( 3) 00:12:48.640 2.680 - 2.693: 99.4252% ( 1) 00:12:48.640 4.267 - 4.293: 99.4305% ( 1) 00:12:48.640 4.347 - 4.373: 99.4357% ( 1) 00:12:48.640 4.560 - 4.587: 99.4463% ( 2) 00:12:48.640 4.613 - 4.640: 99.4568% ( 2) 00:12:48.640 4.640 - 4.667: 99.4674% ( 2) 00:12:48.640 4.667 - 4.693: 99.4727% ( 1) 00:12:48.640 4.693 - 4.720: 99.4779% ( 1) 00:12:48.640 4.800 - 4.827: 99.4832% ( 1) 00:12:48.640 4.827 - 4.853: 99.4938% ( 2) 00:12:48.640 4.960 - 4.987: 99.5096% ( 3) 00:12:48.640 4.987 - 5.013: 99.5148% ( 1) 00:12:48.640 5.040 - 5.067: 99.5201% ( 1) 00:12:48.640 5.200 - 5.227: 99.5254% ( 1) 00:12:48.640 5.253 - 5.280: 99.5307% ( 1) 00:12:48.640 5.360 - 5.387: 99.5359% ( 1) 00:12:48.640 5.387 - 5.413: 99.5465% ( 2) 00:12:48.640 5.547 - 5.573: 99.5518% ( 1) 00:12:48.640 5.653 - 5.680: 99.5570% ( 1) 00:12:48.640 5.707 - 5.733: 99.5676% ( 2) 00:12:48.640 5.973 - 6.000: 99.5781% ( 2) 00:12:48.640 7.093 - 7.147: 99.5887% ( 2) 00:12:48.640 10.133 - 10.187: 99.5939% ( 1) 00:12:48.640 10.933 - 10.987: 99.5992% ( 1) 00:12:48.640 11.360 - 11.413: 99.6045% ( 1) 00:12:48.640 12.693 - 12.747: 99.6098% ( 1) 00:12:48.640 2034.347 - 2048.000: 99.6203% ( 2) 00:12:48.640 3986.773 - 4014.080: 99.9895% ( 70) 00:12:48.640 4969.813 - 4997.120: 99.9947% ( 1) 00:12:48.640 5980.160 - 6007.467: 100.0000% ( 1) 00:12:48.640 00:12:48.640 21:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:48.640 21:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:48.640 21:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:48.640 21:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:48.640 21:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:48.899 [ 00:12:48.899 { 00:12:48.899 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:48.899 "subtype": "Discovery", 00:12:48.899 "listen_addresses": [], 00:12:48.899 "allow_any_host": true, 00:12:48.899 "hosts": [] 00:12:48.899 }, 00:12:48.899 { 00:12:48.899 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:48.899 "subtype": "NVMe", 00:12:48.899 "listen_addresses": [ 00:12:48.899 { 00:12:48.899 "trtype": "VFIOUSER", 00:12:48.899 "adrfam": "IPv4", 00:12:48.899 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:48.899 "trsvcid": "0" 00:12:48.899 } 
00:12:48.899 ], 00:12:48.899 "allow_any_host": true, 00:12:48.899 "hosts": [], 00:12:48.899 "serial_number": "SPDK1", 00:12:48.899 "model_number": "SPDK bdev Controller", 00:12:48.899 "max_namespaces": 32, 00:12:48.899 "min_cntlid": 1, 00:12:48.899 "max_cntlid": 65519, 00:12:48.899 "namespaces": [ 00:12:48.899 { 00:12:48.899 "nsid": 1, 00:12:48.899 "bdev_name": "Malloc1", 00:12:48.899 "name": "Malloc1", 00:12:48.899 "nguid": "6A2323D5864A466BB464C539170A5D73", 00:12:48.899 "uuid": "6a2323d5-864a-466b-b464-c539170a5d73" 00:12:48.899 } 00:12:48.899 ] 00:12:48.899 }, 00:12:48.899 { 00:12:48.899 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:48.899 "subtype": "NVMe", 00:12:48.899 "listen_addresses": [ 00:12:48.899 { 00:12:48.899 "trtype": "VFIOUSER", 00:12:48.899 "adrfam": "IPv4", 00:12:48.899 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:48.899 "trsvcid": "0" 00:12:48.899 } 00:12:48.899 ], 00:12:48.899 "allow_any_host": true, 00:12:48.899 "hosts": [], 00:12:48.899 "serial_number": "SPDK2", 00:12:48.900 "model_number": "SPDK bdev Controller", 00:12:48.900 "max_namespaces": 32, 00:12:48.900 "min_cntlid": 1, 00:12:48.900 "max_cntlid": 65519, 00:12:48.900 "namespaces": [ 00:12:48.900 { 00:12:48.900 "nsid": 1, 00:12:48.900 "bdev_name": "Malloc2", 00:12:48.900 "name": "Malloc2", 00:12:48.900 "nguid": "8DDDBB90F6384C65B6644D560B76B7A3", 00:12:48.900 "uuid": "8dddbb90-f638-4c65-b664-4d560b76b7a3" 00:12:48.900 } 00:12:48.900 ] 00:12:48.900 } 00:12:48.900 ] 00:12:48.900 21:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:48.900 21:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2087411 00:12:48.900 21:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:48.900 21:28:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:48.900 21:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:48.900 21:28:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:48.900 21:28:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:48.900 21:28:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:48.900 21:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:48.900 21:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:48.900 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.160 Malloc3 00:12:49.160 [2024-07-15 21:28:38.716549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:49.160 21:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:49.160 [2024-07-15 21:28:38.886655] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:49.160 21:28:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:49.160 Asynchronous Event Request test 00:12:49.160 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:49.160 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:49.160 Registering asynchronous event callbacks... 00:12:49.160 Starting namespace attribute notice tests for all controllers... 00:12:49.160 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:49.160 aer_cb - Changed Namespace 00:12:49.160 Cleaning up... 00:12:49.422 [ 00:12:49.422 { 00:12:49.422 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:49.422 "subtype": "Discovery", 00:12:49.422 "listen_addresses": [], 00:12:49.422 "allow_any_host": true, 00:12:49.422 "hosts": [] 00:12:49.422 }, 00:12:49.422 { 00:12:49.422 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:49.422 "subtype": "NVMe", 00:12:49.422 "listen_addresses": [ 00:12:49.422 { 00:12:49.422 "trtype": "VFIOUSER", 00:12:49.422 "adrfam": "IPv4", 00:12:49.422 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:49.422 "trsvcid": "0" 00:12:49.422 } 00:12:49.422 ], 00:12:49.422 "allow_any_host": true, 00:12:49.422 "hosts": [], 00:12:49.422 "serial_number": "SPDK1", 00:12:49.422 "model_number": "SPDK bdev Controller", 00:12:49.422 "max_namespaces": 32, 00:12:49.422 "min_cntlid": 1, 00:12:49.422 "max_cntlid": 65519, 00:12:49.422 "namespaces": [ 00:12:49.422 { 00:12:49.422 "nsid": 1, 00:12:49.422 "bdev_name": "Malloc1", 00:12:49.422 "name": "Malloc1", 00:12:49.422 "nguid": "6A2323D5864A466BB464C539170A5D73", 00:12:49.422 "uuid": "6a2323d5-864a-466b-b464-c539170a5d73" 00:12:49.422 }, 00:12:49.422 { 00:12:49.422 "nsid": 2, 00:12:49.422 "bdev_name": "Malloc3", 00:12:49.422 "name": "Malloc3", 00:12:49.422 "nguid": "676B27BDAC214EFDAD76DF282C996B68", 00:12:49.422 "uuid": "676b27bd-ac21-4efd-ad76-df282c996b68" 00:12:49.422 } 00:12:49.422 ] 00:12:49.422 }, 00:12:49.422 { 00:12:49.422 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:49.422 "subtype": "NVMe", 00:12:49.422 "listen_addresses": [ 00:12:49.422 { 00:12:49.422 "trtype": "VFIOUSER", 00:12:49.422 "adrfam": "IPv4", 00:12:49.422 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:49.422 "trsvcid": "0" 00:12:49.422 } 00:12:49.422 ], 00:12:49.422 "allow_any_host": true, 00:12:49.422 "hosts": [], 00:12:49.422 "serial_number": "SPDK2", 00:12:49.422 "model_number": "SPDK bdev Controller", 00:12:49.422 
"max_namespaces": 32, 00:12:49.422 "min_cntlid": 1, 00:12:49.422 "max_cntlid": 65519, 00:12:49.422 "namespaces": [ 00:12:49.422 { 00:12:49.422 "nsid": 1, 00:12:49.422 "bdev_name": "Malloc2", 00:12:49.422 "name": "Malloc2", 00:12:49.422 "nguid": "8DDDBB90F6384C65B6644D560B76B7A3", 00:12:49.422 "uuid": "8dddbb90-f638-4c65-b664-4d560b76b7a3" 00:12:49.422 } 00:12:49.422 ] 00:12:49.422 } 00:12:49.422 ] 00:12:49.422 21:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2087411 00:12:49.422 21:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:49.422 21:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:49.422 21:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:49.422 21:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:49.422 [2024-07-15 21:28:39.104496] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:12:49.422 [2024-07-15 21:28:39.104534] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087534 ] 00:12:49.422 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.422 [2024-07-15 21:28:39.136660] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:49.422 [2024-07-15 21:28:39.145349] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:49.422 [2024-07-15 21:28:39.145371] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8e6d495000 00:12:49.422 [2024-07-15 21:28:39.146348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:49.422 [2024-07-15 21:28:39.147355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:49.422 [2024-07-15 21:28:39.148358] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:49.422 [2024-07-15 21:28:39.149368] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:49.422 [2024-07-15 21:28:39.150376] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:49.422 [2024-07-15 21:28:39.151384] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:49.422 [2024-07-15 21:28:39.152393] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:49.422 [2024-07-15 21:28:39.153397] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:49.422 [2024-07-15 21:28:39.154411] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:49.422 [2024-07-15 21:28:39.154421] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8e6d48a000 00:12:49.422 [2024-07-15 21:28:39.155748] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:49.422 [2024-07-15 21:28:39.171953] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:49.422 [2024-07-15 21:28:39.171972] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:49.422 [2024-07-15 21:28:39.177050] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:49.422 [2024-07-15 21:28:39.177094] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:49.422 [2024-07-15 21:28:39.177176] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:49.422 [2024-07-15 21:28:39.177191] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:49.422 [2024-07-15 21:28:39.177197] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:49.422 [2024-07-15 21:28:39.178051] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:49.422 [2024-07-15 21:28:39.178060] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:49.422 [2024-07-15 21:28:39.178068] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:49.422 [2024-07-15 21:28:39.179054] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:49.422 [2024-07-15 21:28:39.179063] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:49.422 [2024-07-15 21:28:39.179071] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:49.422 [2024-07-15 21:28:39.180063] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:49.422 [2024-07-15 21:28:39.180073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:49.422 [2024-07-15 21:28:39.181069] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:49.422 [2024-07-15 21:28:39.181077] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:49.422 [2024-07-15 21:28:39.181082] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:49.422 [2024-07-15 21:28:39.181088] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:49.422 [2024-07-15 21:28:39.181194] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:49.422 [2024-07-15 21:28:39.181199] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:49.422 [2024-07-15 21:28:39.181204] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:49.422 [2024-07-15 21:28:39.182072] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:49.422 [2024-07-15 21:28:39.183078] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:49.422 [2024-07-15 21:28:39.184084] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:49.422 [2024-07-15 21:28:39.185091] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:49.422 [2024-07-15 21:28:39.185136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:49.422 [2024-07-15 21:28:39.186099] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:49.422 [2024-07-15 21:28:39.186107] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:49.422 [2024-07-15 21:28:39.186112] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:49.422 [2024-07-15 21:28:39.186136] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:49.422 [2024-07-15 21:28:39.186147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:49.422 [2024-07-15 21:28:39.186162] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:49.422 [2024-07-15 21:28:39.186167] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:49.422 [2024-07-15 21:28:39.186178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:49.422 [2024-07-15 21:28:39.194131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:49.422 [2024-07-15 21:28:39.194143] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:49.422 [2024-07-15 21:28:39.194150] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:49.423 [2024-07-15 21:28:39.194154] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:49.423 [2024-07-15 21:28:39.194159] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:49.423 [2024-07-15 21:28:39.194163] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:49.423 [2024-07-15 21:28:39.194168] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:49.423 [2024-07-15 21:28:39.194172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:49.423 [2024-07-15 21:28:39.194180] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:49.423 [2024-07-15 21:28:39.194189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:49.423 [2024-07-15 21:28:39.202129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:49.423 [2024-07-15 21:28:39.202143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.423 [2024-07-15 21:28:39.202152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.423 [2024-07-15 21:28:39.202160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.423 [2024-07-15 21:28:39.202169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.423 [2024-07-15 21:28:39.202173] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:49.423 [2024-07-15 21:28:39.202181] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:49.423 [2024-07-15 21:28:39.202190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:49.423 [2024-07-15 21:28:39.210128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:49.423 [2024-07-15 21:28:39.210136] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:49.423 [2024-07-15 21:28:39.210141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:49.423 [2024-07-15 21:28:39.210148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:49.423 [2024-07-15 21:28:39.210156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:49.423 [2024-07-15 21:28:39.210165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:49.423 [2024-07-15 21:28:39.218128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:49.423 [2024-07-15 21:28:39.218193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:49.423 [2024-07-15 21:28:39.218201] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:49.423 [2024-07-15 21:28:39.218208] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:49.423 [2024-07-15 21:28:39.218212] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:49.423 [2024-07-15 21:28:39.218219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:49.684 [2024-07-15 21:28:39.226127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:49.684 [2024-07-15 21:28:39.226137] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:49.684 [2024-07-15 21:28:39.226146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:49.684 [2024-07-15 21:28:39.226153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:49.684 [2024-07-15 21:28:39.226160] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:49.684 [2024-07-15 21:28:39.226165] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:49.684 [2024-07-15 21:28:39.226171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:49.684 [2024-07-15 21:28:39.234130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:49.684 [2024-07-15 21:28:39.234143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:49.684 [2024-07-15 21:28:39.234150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:49.684 [2024-07-15 21:28:39.234158] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:49.684 [2024-07-15 21:28:39.234162] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:49.684 [2024-07-15 21:28:39.234168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:49.684 [2024-07-15 21:28:39.242127] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:49.684 [2024-07-15 21:28:39.242136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:49.684 [2024-07-15 21:28:39.242142] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:49.684 [2024-07-15 21:28:39.242150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:49.684 [2024-07-15 21:28:39.242155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:49.684 [2024-07-15 21:28:39.242162] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:49.684 [2024-07-15 21:28:39.242167] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:49.684 [2024-07-15 21:28:39.242172] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:49.684 [2024-07-15 21:28:39.242176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:49.684 [2024-07-15 21:28:39.242181] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:49.684 [2024-07-15 21:28:39.242198] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:49.684 [2024-07-15 21:28:39.250127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:49.684 [2024-07-15 21:28:39.250141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:49.684 [2024-07-15 21:28:39.258128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:49.684 [2024-07-15 21:28:39.258141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:49.684 [2024-07-15 21:28:39.266129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:49.684 [2024-07-15 21:28:39.266141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:49.684 [2024-07-15 21:28:39.274126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:49.684 [2024-07-15 21:28:39.274144] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:49.684 [2024-07-15 21:28:39.274149] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:49.685 [2024-07-15 21:28:39.274152] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
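The trace above is the standard SPDK controller bring-up over the vfio-user transport: connect the admin queue, read the VS and CAP registers, toggle CC.EN against CSTS.RDY, then walk the admin command sequence (Identify Controller, AER configuration, keep-alive timeout, queue count, active-namespace and per-namespace Identify) until the controller reaches the ready state. It is produced by the spdk_nvme_identify invocation recorded earlier; as a sketch with the workspace prefix dropped for readability, the command has the shape:

    # -r takes an SPDK transport ID string:
    #   trtype:VFIOUSER  traddr:<directory containing the vfio-user socket>  subnqn:<subsystem NQN>
    # -L enables per-component debug logging, which is what emits the state-machine trace above;
    # -g is carried over verbatim from the test invocation.
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -g -L nvme -L nvme_vfio -L vfio_pci
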
00:12:49.685 [2024-07-15 21:28:39.274156] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:49.685 [2024-07-15 21:28:39.274162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:49.685 [2024-07-15 21:28:39.274170] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:49.685 [2024-07-15 21:28:39.274174] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:49.685 [2024-07-15 21:28:39.274179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:49.685 [2024-07-15 21:28:39.274187] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:49.685 [2024-07-15 21:28:39.274191] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:49.685 [2024-07-15 21:28:39.274197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:49.685 [2024-07-15 21:28:39.274204] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:49.685 [2024-07-15 21:28:39.274208] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:49.685 [2024-07-15 21:28:39.274214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:49.685 [2024-07-15 21:28:39.282129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:49.685 [2024-07-15 21:28:39.282144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:49.685 [2024-07-15 21:28:39.282154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:49.685 [2024-07-15 21:28:39.282161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:49.685 ===================================================== 00:12:49.685 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:49.685 ===================================================== 00:12:49.685 Controller Capabilities/Features 00:12:49.685 ================================ 00:12:49.685 Vendor ID: 4e58 00:12:49.685 Subsystem Vendor ID: 4e58 00:12:49.685 Serial Number: SPDK2 00:12:49.685 Model Number: SPDK bdev Controller 00:12:49.685 Firmware Version: 24.09 00:12:49.685 Recommended Arb Burst: 6 00:12:49.685 IEEE OUI Identifier: 8d 6b 50 00:12:49.685 Multi-path I/O 00:12:49.685 May have multiple subsystem ports: Yes 00:12:49.685 May have multiple controllers: Yes 00:12:49.685 Associated with SR-IOV VF: No 00:12:49.685 Max Data Transfer Size: 131072 00:12:49.685 Max Number of Namespaces: 32 00:12:49.685 Max Number of I/O Queues: 127 00:12:49.685 NVMe Specification Version (VS): 1.3 00:12:49.685 NVMe Specification Version (Identify): 1.3 00:12:49.685 Maximum Queue Entries: 256 00:12:49.685 Contiguous Queues Required: Yes 00:12:49.685 Arbitration Mechanisms 
Supported 00:12:49.685 Weighted Round Robin: Not Supported 00:12:49.685 Vendor Specific: Not Supported 00:12:49.685 Reset Timeout: 15000 ms 00:12:49.685 Doorbell Stride: 4 bytes 00:12:49.685 NVM Subsystem Reset: Not Supported 00:12:49.685 Command Sets Supported 00:12:49.685 NVM Command Set: Supported 00:12:49.685 Boot Partition: Not Supported 00:12:49.685 Memory Page Size Minimum: 4096 bytes 00:12:49.685 Memory Page Size Maximum: 4096 bytes 00:12:49.685 Persistent Memory Region: Not Supported 00:12:49.685 Optional Asynchronous Events Supported 00:12:49.685 Namespace Attribute Notices: Supported 00:12:49.685 Firmware Activation Notices: Not Supported 00:12:49.685 ANA Change Notices: Not Supported 00:12:49.685 PLE Aggregate Log Change Notices: Not Supported 00:12:49.685 LBA Status Info Alert Notices: Not Supported 00:12:49.685 EGE Aggregate Log Change Notices: Not Supported 00:12:49.685 Normal NVM Subsystem Shutdown event: Not Supported 00:12:49.685 Zone Descriptor Change Notices: Not Supported 00:12:49.685 Discovery Log Change Notices: Not Supported 00:12:49.685 Controller Attributes 00:12:49.685 128-bit Host Identifier: Supported 00:12:49.685 Non-Operational Permissive Mode: Not Supported 00:12:49.685 NVM Sets: Not Supported 00:12:49.685 Read Recovery Levels: Not Supported 00:12:49.685 Endurance Groups: Not Supported 00:12:49.685 Predictable Latency Mode: Not Supported 00:12:49.685 Traffic Based Keep ALive: Not Supported 00:12:49.685 Namespace Granularity: Not Supported 00:12:49.685 SQ Associations: Not Supported 00:12:49.685 UUID List: Not Supported 00:12:49.685 Multi-Domain Subsystem: Not Supported 00:12:49.685 Fixed Capacity Management: Not Supported 00:12:49.685 Variable Capacity Management: Not Supported 00:12:49.685 Delete Endurance Group: Not Supported 00:12:49.685 Delete NVM Set: Not Supported 00:12:49.685 Extended LBA Formats Supported: Not Supported 00:12:49.685 Flexible Data Placement Supported: Not Supported 00:12:49.685 00:12:49.685 Controller Memory Buffer Support 00:12:49.685 ================================ 00:12:49.685 Supported: No 00:12:49.685 00:12:49.685 Persistent Memory Region Support 00:12:49.685 ================================ 00:12:49.685 Supported: No 00:12:49.685 00:12:49.685 Admin Command Set Attributes 00:12:49.685 ============================ 00:12:49.685 Security Send/Receive: Not Supported 00:12:49.685 Format NVM: Not Supported 00:12:49.685 Firmware Activate/Download: Not Supported 00:12:49.685 Namespace Management: Not Supported 00:12:49.685 Device Self-Test: Not Supported 00:12:49.685 Directives: Not Supported 00:12:49.685 NVMe-MI: Not Supported 00:12:49.685 Virtualization Management: Not Supported 00:12:49.685 Doorbell Buffer Config: Not Supported 00:12:49.685 Get LBA Status Capability: Not Supported 00:12:49.685 Command & Feature Lockdown Capability: Not Supported 00:12:49.685 Abort Command Limit: 4 00:12:49.685 Async Event Request Limit: 4 00:12:49.685 Number of Firmware Slots: N/A 00:12:49.685 Firmware Slot 1 Read-Only: N/A 00:12:49.685 Firmware Activation Without Reset: N/A 00:12:49.685 Multiple Update Detection Support: N/A 00:12:49.685 Firmware Update Granularity: No Information Provided 00:12:49.685 Per-Namespace SMART Log: No 00:12:49.685 Asymmetric Namespace Access Log Page: Not Supported 00:12:49.685 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:49.685 Command Effects Log Page: Supported 00:12:49.685 Get Log Page Extended Data: Supported 00:12:49.685 Telemetry Log Pages: Not Supported 00:12:49.685 Persistent Event Log Pages: Not Supported 
00:12:49.685 Supported Log Pages Log Page: May Support 00:12:49.685 Commands Supported & Effects Log Page: Not Supported 00:12:49.685 Feature Identifiers & Effects Log Page:May Support 00:12:49.685 NVMe-MI Commands & Effects Log Page: May Support 00:12:49.685 Data Area 4 for Telemetry Log: Not Supported 00:12:49.685 Error Log Page Entries Supported: 128 00:12:49.685 Keep Alive: Supported 00:12:49.685 Keep Alive Granularity: 10000 ms 00:12:49.685 00:12:49.685 NVM Command Set Attributes 00:12:49.685 ========================== 00:12:49.685 Submission Queue Entry Size 00:12:49.685 Max: 64 00:12:49.685 Min: 64 00:12:49.685 Completion Queue Entry Size 00:12:49.685 Max: 16 00:12:49.685 Min: 16 00:12:49.685 Number of Namespaces: 32 00:12:49.685 Compare Command: Supported 00:12:49.685 Write Uncorrectable Command: Not Supported 00:12:49.685 Dataset Management Command: Supported 00:12:49.685 Write Zeroes Command: Supported 00:12:49.685 Set Features Save Field: Not Supported 00:12:49.685 Reservations: Not Supported 00:12:49.685 Timestamp: Not Supported 00:12:49.685 Copy: Supported 00:12:49.685 Volatile Write Cache: Present 00:12:49.685 Atomic Write Unit (Normal): 1 00:12:49.685 Atomic Write Unit (PFail): 1 00:12:49.685 Atomic Compare & Write Unit: 1 00:12:49.685 Fused Compare & Write: Supported 00:12:49.685 Scatter-Gather List 00:12:49.685 SGL Command Set: Supported (Dword aligned) 00:12:49.685 SGL Keyed: Not Supported 00:12:49.685 SGL Bit Bucket Descriptor: Not Supported 00:12:49.685 SGL Metadata Pointer: Not Supported 00:12:49.685 Oversized SGL: Not Supported 00:12:49.685 SGL Metadata Address: Not Supported 00:12:49.685 SGL Offset: Not Supported 00:12:49.685 Transport SGL Data Block: Not Supported 00:12:49.685 Replay Protected Memory Block: Not Supported 00:12:49.685 00:12:49.685 Firmware Slot Information 00:12:49.685 ========================= 00:12:49.685 Active slot: 1 00:12:49.685 Slot 1 Firmware Revision: 24.09 00:12:49.685 00:12:49.685 00:12:49.685 Commands Supported and Effects 00:12:49.685 ============================== 00:12:49.685 Admin Commands 00:12:49.685 -------------- 00:12:49.685 Get Log Page (02h): Supported 00:12:49.685 Identify (06h): Supported 00:12:49.685 Abort (08h): Supported 00:12:49.686 Set Features (09h): Supported 00:12:49.686 Get Features (0Ah): Supported 00:12:49.686 Asynchronous Event Request (0Ch): Supported 00:12:49.686 Keep Alive (18h): Supported 00:12:49.686 I/O Commands 00:12:49.686 ------------ 00:12:49.686 Flush (00h): Supported LBA-Change 00:12:49.686 Write (01h): Supported LBA-Change 00:12:49.686 Read (02h): Supported 00:12:49.686 Compare (05h): Supported 00:12:49.686 Write Zeroes (08h): Supported LBA-Change 00:12:49.686 Dataset Management (09h): Supported LBA-Change 00:12:49.686 Copy (19h): Supported LBA-Change 00:12:49.686 00:12:49.686 Error Log 00:12:49.686 ========= 00:12:49.686 00:12:49.686 Arbitration 00:12:49.686 =========== 00:12:49.686 Arbitration Burst: 1 00:12:49.686 00:12:49.686 Power Management 00:12:49.686 ================ 00:12:49.686 Number of Power States: 1 00:12:49.686 Current Power State: Power State #0 00:12:49.686 Power State #0: 00:12:49.686 Max Power: 0.00 W 00:12:49.686 Non-Operational State: Operational 00:12:49.686 Entry Latency: Not Reported 00:12:49.686 Exit Latency: Not Reported 00:12:49.686 Relative Read Throughput: 0 00:12:49.686 Relative Read Latency: 0 00:12:49.686 Relative Write Throughput: 0 00:12:49.686 Relative Write Latency: 0 00:12:49.686 Idle Power: Not Reported 00:12:49.686 Active Power: Not Reported 00:12:49.686 
Non-Operational Permissive Mode: Not Supported 00:12:49.686 00:12:49.686 Health Information 00:12:49.686 ================== 00:12:49.686 Critical Warnings: 00:12:49.686 Available Spare Space: OK 00:12:49.686 Temperature: OK 00:12:49.686 Device Reliability: OK 00:12:49.686 Read Only: No 00:12:49.686 Volatile Memory Backup: OK 00:12:49.686 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:49.686 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:49.686 Available Spare: 0% 00:12:49.686 Available Sp[2024-07-15 21:28:39.282256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:49.686 [2024-07-15 21:28:39.290128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:49.686 [2024-07-15 21:28:39.290159] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:49.686 [2024-07-15 21:28:39.290168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.686 [2024-07-15 21:28:39.290174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.686 [2024-07-15 21:28:39.290181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.686 [2024-07-15 21:28:39.290187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.686 [2024-07-15 21:28:39.290242] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:49.686 [2024-07-15 21:28:39.290253] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:49.686 [2024-07-15 21:28:39.291245] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.686 [2024-07-15 21:28:39.291293] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:49.686 [2024-07-15 21:28:39.291300] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:49.686 [2024-07-15 21:28:39.292246] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:49.686 [2024-07-15 21:28:39.292257] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:49.686 [2024-07-15 21:28:39.292307] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:49.686 [2024-07-15 21:28:39.293689] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:49.686 are Threshold: 0% 00:12:49.686 Life Percentage Used: 0% 00:12:49.686 Data Units Read: 0 00:12:49.686 Data Units Written: 0 00:12:49.686 Host Read Commands: 0 00:12:49.686 Host Write Commands: 0 00:12:49.686 Controller Busy Time: 0 minutes 00:12:49.686 Power Cycles: 0 00:12:49.686 Power On Hours: 0 hours 00:12:49.686 Unsafe Shutdowns: 0 00:12:49.686 Unrecoverable Media 
Errors: 0 00:12:49.686 Lifetime Error Log Entries: 0 00:12:49.686 Warning Temperature Time: 0 minutes 00:12:49.686 Critical Temperature Time: 0 minutes 00:12:49.686 00:12:49.686 Number of Queues 00:12:49.686 ================ 00:12:49.686 Number of I/O Submission Queues: 127 00:12:49.686 Number of I/O Completion Queues: 127 00:12:49.686 00:12:49.686 Active Namespaces 00:12:49.686 ================= 00:12:49.686 Namespace ID:1 00:12:49.686 Error Recovery Timeout: Unlimited 00:12:49.686 Command Set Identifier: NVM (00h) 00:12:49.686 Deallocate: Supported 00:12:49.686 Deallocated/Unwritten Error: Not Supported 00:12:49.686 Deallocated Read Value: Unknown 00:12:49.686 Deallocate in Write Zeroes: Not Supported 00:12:49.686 Deallocated Guard Field: 0xFFFF 00:12:49.686 Flush: Supported 00:12:49.686 Reservation: Supported 00:12:49.686 Namespace Sharing Capabilities: Multiple Controllers 00:12:49.686 Size (in LBAs): 131072 (0GiB) 00:12:49.686 Capacity (in LBAs): 131072 (0GiB) 00:12:49.686 Utilization (in LBAs): 131072 (0GiB) 00:12:49.686 NGUID: 8DDDBB90F6384C65B6644D560B76B7A3 00:12:49.686 UUID: 8dddbb90-f638-4c65-b664-4d560b76b7a3 00:12:49.686 Thin Provisioning: Not Supported 00:12:49.686 Per-NS Atomic Units: Yes 00:12:49.686 Atomic Boundary Size (Normal): 0 00:12:49.686 Atomic Boundary Size (PFail): 0 00:12:49.686 Atomic Boundary Offset: 0 00:12:49.686 Maximum Single Source Range Length: 65535 00:12:49.686 Maximum Copy Length: 65535 00:12:49.686 Maximum Source Range Count: 1 00:12:49.686 NGUID/EUI64 Never Reused: No 00:12:49.686 Namespace Write Protected: No 00:12:49.686 Number of LBA Formats: 1 00:12:49.686 Current LBA Format: LBA Format #00 00:12:49.686 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:49.686 00:12:49.686 21:28:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:49.686 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.686 [2024-07-15 21:28:39.478118] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:55.000 Initializing NVMe Controllers 00:12:55.000 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:55.000 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:55.000 Initialization complete. Launching workers. 
00:12:55.000 ======================================================== 00:12:55.000 Latency(us) 00:12:55.000 Device Information : IOPS MiB/s Average min max 00:12:55.000 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39966.49 156.12 3202.55 839.17 6847.93 00:12:55.000 ======================================================== 00:12:55.000 Total : 39966.49 156.12 3202.55 839.17 6847.93 00:12:55.000 00:12:55.000 [2024-07-15 21:28:44.585306] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:55.000 21:28:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:55.000 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.000 [2024-07-15 21:28:44.768888] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:00.290 Initializing NVMe Controllers 00:13:00.290 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:00.290 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:00.290 Initialization complete. Launching workers. 00:13:00.290 ======================================================== 00:13:00.290 Latency(us) 00:13:00.290 Device Information : IOPS MiB/s Average min max 00:13:00.290 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35349.80 138.09 3621.55 1105.96 7369.37 00:13:00.291 ======================================================== 00:13:00.291 Total : 35349.80 138.09 3621.55 1105.96 7369.37 00:13:00.291 00:13:00.291 [2024-07-15 21:28:49.789291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:00.291 21:28:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:00.291 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.291 [2024-07-15 21:28:49.969514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:05.581 [2024-07-15 21:28:55.109211] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:05.581 Initializing NVMe Controllers 00:13:05.581 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:05.581 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:05.581 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:05.581 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:05.581 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:05.581 Initialization complete. Launching workers. 
00:13:05.581 Starting thread on core 2 00:13:05.581 Starting thread on core 3 00:13:05.581 Starting thread on core 1 00:13:05.581 21:28:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:05.581 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.581 [2024-07-15 21:28:55.363510] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:08.885 [2024-07-15 21:28:58.421608] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:08.885 Initializing NVMe Controllers 00:13:08.885 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:08.885 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:08.885 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:08.885 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:08.885 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:08.885 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:08.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:08.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:08.885 Initialization complete. Launching workers. 00:13:08.885 Starting thread on core 1 with urgent priority queue 00:13:08.885 Starting thread on core 2 with urgent priority queue 00:13:08.885 Starting thread on core 3 with urgent priority queue 00:13:08.885 Starting thread on core 0 with urgent priority queue 00:13:08.885 SPDK bdev Controller (SPDK2 ) core 0: 13292.33 IO/s 7.52 secs/100000 ios 00:13:08.885 SPDK bdev Controller (SPDK2 ) core 1: 13130.67 IO/s 7.62 secs/100000 ios 00:13:08.885 SPDK bdev Controller (SPDK2 ) core 2: 11122.00 IO/s 8.99 secs/100000 ios 00:13:08.885 SPDK bdev Controller (SPDK2 ) core 3: 8983.67 IO/s 11.13 secs/100000 ios 00:13:08.885 ======================================================== 00:13:08.885 00:13:08.885 21:28:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:08.885 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.885 [2024-07-15 21:28:58.681534] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:09.146 Initializing NVMe Controllers 00:13:09.146 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:09.146 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:09.146 Namespace ID: 1 size: 0GB 00:13:09.146 Initialization complete. 00:13:09.146 INFO: using host memory buffer for IO 00:13:09.146 Hello world! 
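The I/O passes above (sequential read, sequential write, the random read/write reconnect run, and the arbitration run) all target the same vfio-user controller and differ only in workload parameters. As a condensed sketch of the perf invocation used here, with the workspace prefix shortened and all values taken from the runs above:

    # -q 128   queue depth
    # -o 4096  I/O size in bytes
    # -w read  workload type (-w write in the second pass)
    # -t 5     run time in seconds
    # -c 0x2   core mask
    # -s 256 and -g are carried over verbatim from the test invocation.
    ./build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
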
00:13:09.146 [2024-07-15 21:28:58.694625] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:09.146 21:28:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:09.146 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.146 [2024-07-15 21:28:58.948240] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:10.533 Initializing NVMe Controllers 00:13:10.533 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:10.533 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:10.533 Initialization complete. Launching workers. 00:13:10.533 submit (in ns) avg, min, max = 8895.1, 3895.8, 7988629.2 00:13:10.533 complete (in ns) avg, min, max = 18066.5, 2377.5, 7988997.5 00:13:10.533 00:13:10.533 Submit histogram 00:13:10.533 ================ 00:13:10.533 Range in us Cumulative Count 00:13:10.533 3.893 - 3.920: 1.7080% ( 324) 00:13:10.533 3.920 - 3.947: 8.8407% ( 1353) 00:13:10.533 3.947 - 3.973: 18.0241% ( 1742) 00:13:10.533 3.973 - 4.000: 29.1686% ( 2114) 00:13:10.533 4.000 - 4.027: 39.3906% ( 1939) 00:13:10.533 4.027 - 4.053: 51.3838% ( 2275) 00:13:10.533 4.053 - 4.080: 67.1728% ( 2995) 00:13:10.533 4.080 - 4.107: 81.9759% ( 2808) 00:13:10.533 4.107 - 4.133: 91.9184% ( 1886) 00:13:10.533 4.133 - 4.160: 96.7948% ( 925) 00:13:10.533 4.160 - 4.187: 98.6293% ( 348) 00:13:10.533 4.187 - 4.213: 99.2461% ( 117) 00:13:10.533 4.213 - 4.240: 99.4254% ( 34) 00:13:10.533 4.240 - 4.267: 99.4728% ( 9) 00:13:10.533 4.267 - 4.293: 99.4834% ( 2) 00:13:10.533 4.293 - 4.320: 99.4886% ( 1) 00:13:10.533 4.320 - 4.347: 99.4992% ( 2) 00:13:10.533 4.480 - 4.507: 99.5045% ( 1) 00:13:10.533 4.560 - 4.587: 99.5097% ( 1) 00:13:10.533 4.613 - 4.640: 99.5150% ( 1) 00:13:10.533 4.720 - 4.747: 99.5203% ( 1) 00:13:10.533 4.773 - 4.800: 99.5255% ( 1) 00:13:10.533 4.800 - 4.827: 99.5308% ( 1) 00:13:10.533 4.960 - 4.987: 99.5361% ( 1) 00:13:10.533 5.067 - 5.093: 99.5414% ( 1) 00:13:10.533 5.387 - 5.413: 99.5466% ( 1) 00:13:10.533 5.440 - 5.467: 99.5519% ( 1) 00:13:10.533 5.573 - 5.600: 99.5624% ( 2) 00:13:10.533 5.600 - 5.627: 99.5677% ( 1) 00:13:10.533 5.680 - 5.707: 99.5730% ( 1) 00:13:10.533 5.733 - 5.760: 99.5835% ( 2) 00:13:10.533 5.760 - 5.787: 99.5888% ( 1) 00:13:10.533 5.787 - 5.813: 99.5993% ( 2) 00:13:10.533 5.867 - 5.893: 99.6046% ( 1) 00:13:10.534 5.893 - 5.920: 99.6099% ( 1) 00:13:10.534 5.947 - 5.973: 99.6152% ( 1) 00:13:10.534 6.000 - 6.027: 99.6257% ( 2) 00:13:10.534 6.027 - 6.053: 99.6310% ( 1) 00:13:10.534 6.053 - 6.080: 99.6362% ( 1) 00:13:10.534 6.107 - 6.133: 99.6521% ( 3) 00:13:10.534 6.133 - 6.160: 99.6626% ( 2) 00:13:10.534 6.160 - 6.187: 99.6679% ( 1) 00:13:10.534 6.187 - 6.213: 99.6732% ( 1) 00:13:10.534 6.240 - 6.267: 99.6784% ( 1) 00:13:10.534 6.267 - 6.293: 99.6837% ( 1) 00:13:10.534 6.293 - 6.320: 99.6942% ( 2) 00:13:10.534 6.320 - 6.347: 99.6995% ( 1) 00:13:10.534 6.347 - 6.373: 99.7048% ( 1) 00:13:10.534 6.373 - 6.400: 99.7101% ( 1) 00:13:10.534 6.427 - 6.453: 99.7206% ( 2) 00:13:10.534 6.453 - 6.480: 99.7311% ( 2) 00:13:10.534 6.507 - 6.533: 99.7470% ( 3) 00:13:10.534 6.533 - 6.560: 99.7522% ( 1) 00:13:10.534 6.560 - 6.587: 99.7628% ( 2) 00:13:10.534 6.587 - 6.613: 99.7733% ( 2) 00:13:10.534 6.613 - 6.640: 99.7839% ( 2) 00:13:10.534 6.693 - 6.720: 99.7891% ( 1) 
00:13:10.534 6.747 - 6.773: 99.7944% ( 1) 00:13:10.534 6.827 - 6.880: 99.7997% ( 1) 00:13:10.534 6.880 - 6.933: 99.8049% ( 1) 00:13:10.534 6.933 - 6.987: 99.8102% ( 1) 00:13:10.534 7.147 - 7.200: 99.8155% ( 1) 00:13:10.534 7.200 - 7.253: 99.8208% ( 1) 00:13:10.534 7.253 - 7.307: 99.8260% ( 1) 00:13:10.534 7.307 - 7.360: 99.8418% ( 3) 00:13:10.534 7.520 - 7.573: 99.8471% ( 1) 00:13:10.534 7.573 - 7.627: 99.8524% ( 1) 00:13:10.534 7.733 - 7.787: 99.8577% ( 1) 00:13:10.534 8.053 - 8.107: 99.8629% ( 1) 00:13:10.534 8.320 - 8.373: 99.8682% ( 1) 00:13:10.534 8.533 - 8.587: 99.8735% ( 1) 00:13:10.534 11.947 - 12.000: 99.8787% ( 1) 00:13:10.534 14.187 - 14.293: 99.8840% ( 1) 00:13:10.534 3986.773 - 4014.080: 99.9947% ( 21) 00:13:10.534 7973.547 - 8028.160: 100.0000% ( 1) 00:13:10.534 00:13:10.534 Complete histogram 00:13:10.534 ================== 00:13:10.534 Range in us Cumulative Count 00:13:10.534 2.373 - 2.387: 0.0053% ( 1) 00:13:10.534 2.387 - 2.400: 1.0069% ( 190) 00:13:10.534 2.400 - [2024-07-15 21:29:00.053834] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:10.534 2.413: 1.1861% ( 34) 00:13:10.534 2.413 - 2.427: 1.3021% ( 22) 00:13:10.534 2.427 - 2.440: 1.3496% ( 9) 00:13:10.534 2.440 - 2.453: 38.9109% ( 7125) 00:13:10.534 2.453 - 2.467: 56.4447% ( 3326) 00:13:10.534 2.467 - 2.480: 68.5065% ( 2288) 00:13:10.534 2.480 - 2.493: 77.2576% ( 1660) 00:13:10.534 2.493 - 2.507: 81.1798% ( 744) 00:13:10.534 2.507 - 2.520: 84.0318% ( 541) 00:13:10.534 2.520 - 2.533: 89.0927% ( 960) 00:13:10.534 2.533 - 2.547: 94.3803% ( 1003) 00:13:10.534 2.547 - 2.560: 96.7631% ( 452) 00:13:10.534 2.560 - 2.573: 98.4185% ( 314) 00:13:10.534 2.573 - 2.587: 99.2356% ( 155) 00:13:10.534 2.587 - 2.600: 99.4201% ( 35) 00:13:10.534 2.600 - 2.613: 99.4412% ( 4) 00:13:10.534 2.667 - 2.680: 99.4465% ( 1) 00:13:10.534 2.680 - 2.693: 99.4517% ( 1) 00:13:10.534 4.240 - 4.267: 99.4570% ( 1) 00:13:10.534 4.320 - 4.347: 99.4623% ( 1) 00:13:10.534 4.347 - 4.373: 99.4676% ( 1) 00:13:10.534 4.427 - 4.453: 99.4728% ( 1) 00:13:10.534 4.453 - 4.480: 99.4781% ( 1) 00:13:10.534 4.587 - 4.613: 99.4834% ( 1) 00:13:10.534 4.613 - 4.640: 99.4886% ( 1) 00:13:10.534 4.640 - 4.667: 99.4992% ( 2) 00:13:10.534 4.693 - 4.720: 99.5045% ( 1) 00:13:10.534 4.747 - 4.773: 99.5097% ( 1) 00:13:10.534 4.773 - 4.800: 99.5150% ( 1) 00:13:10.534 4.800 - 4.827: 99.5203% ( 1) 00:13:10.534 4.853 - 4.880: 99.5255% ( 1) 00:13:10.534 4.880 - 4.907: 99.5361% ( 2) 00:13:10.534 4.987 - 5.013: 99.5414% ( 1) 00:13:10.534 5.013 - 5.040: 99.5466% ( 1) 00:13:10.534 5.253 - 5.280: 99.5572% ( 2) 00:13:10.534 5.360 - 5.387: 99.5624% ( 1) 00:13:10.534 5.467 - 5.493: 99.5677% ( 1) 00:13:10.534 5.493 - 5.520: 99.5730% ( 1) 00:13:10.534 5.573 - 5.600: 99.5783% ( 1) 00:13:10.534 5.760 - 5.787: 99.5835% ( 1) 00:13:10.534 5.920 - 5.947: 99.5941% ( 2) 00:13:10.534 6.693 - 6.720: 99.5993% ( 1) 00:13:10.534 6.773 - 6.800: 99.6046% ( 1) 00:13:10.534 13.120 - 13.173: 99.6099% ( 1) 00:13:10.534 88.320 - 88.747: 99.6152% ( 1) 00:13:10.534 3986.773 - 4014.080: 99.9947% ( 72) 00:13:10.534 7973.547 - 8028.160: 100.0000% ( 1) 00:13:10.534 00:13:10.534 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:10.534 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:10.534 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local 
subnqn=nqn.2019-07.io.spdk:cnode2 00:13:10.534 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:10.534 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:10.534 [ 00:13:10.534 { 00:13:10.534 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:10.534 "subtype": "Discovery", 00:13:10.534 "listen_addresses": [], 00:13:10.534 "allow_any_host": true, 00:13:10.534 "hosts": [] 00:13:10.534 }, 00:13:10.534 { 00:13:10.534 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:10.534 "subtype": "NVMe", 00:13:10.534 "listen_addresses": [ 00:13:10.534 { 00:13:10.534 "trtype": "VFIOUSER", 00:13:10.534 "adrfam": "IPv4", 00:13:10.534 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:10.534 "trsvcid": "0" 00:13:10.534 } 00:13:10.534 ], 00:13:10.534 "allow_any_host": true, 00:13:10.534 "hosts": [], 00:13:10.534 "serial_number": "SPDK1", 00:13:10.534 "model_number": "SPDK bdev Controller", 00:13:10.534 "max_namespaces": 32, 00:13:10.534 "min_cntlid": 1, 00:13:10.534 "max_cntlid": 65519, 00:13:10.534 "namespaces": [ 00:13:10.534 { 00:13:10.534 "nsid": 1, 00:13:10.534 "bdev_name": "Malloc1", 00:13:10.534 "name": "Malloc1", 00:13:10.534 "nguid": "6A2323D5864A466BB464C539170A5D73", 00:13:10.534 "uuid": "6a2323d5-864a-466b-b464-c539170a5d73" 00:13:10.534 }, 00:13:10.534 { 00:13:10.534 "nsid": 2, 00:13:10.534 "bdev_name": "Malloc3", 00:13:10.534 "name": "Malloc3", 00:13:10.534 "nguid": "676B27BDAC214EFDAD76DF282C996B68", 00:13:10.534 "uuid": "676b27bd-ac21-4efd-ad76-df282c996b68" 00:13:10.534 } 00:13:10.534 ] 00:13:10.534 }, 00:13:10.534 { 00:13:10.534 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:10.534 "subtype": "NVMe", 00:13:10.534 "listen_addresses": [ 00:13:10.534 { 00:13:10.534 "trtype": "VFIOUSER", 00:13:10.534 "adrfam": "IPv4", 00:13:10.534 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:10.534 "trsvcid": "0" 00:13:10.534 } 00:13:10.534 ], 00:13:10.534 "allow_any_host": true, 00:13:10.534 "hosts": [], 00:13:10.534 "serial_number": "SPDK2", 00:13:10.534 "model_number": "SPDK bdev Controller", 00:13:10.534 "max_namespaces": 32, 00:13:10.534 "min_cntlid": 1, 00:13:10.534 "max_cntlid": 65519, 00:13:10.534 "namespaces": [ 00:13:10.534 { 00:13:10.534 "nsid": 1, 00:13:10.534 "bdev_name": "Malloc2", 00:13:10.534 "name": "Malloc2", 00:13:10.534 "nguid": "8DDDBB90F6384C65B6644D560B76B7A3", 00:13:10.534 "uuid": "8dddbb90-f638-4c65-b664-4d560b76b7a3" 00:13:10.534 } 00:13:10.534 ] 00:13:10.534 } 00:13:10.534 ] 00:13:10.534 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:10.534 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2091589 00:13:10.534 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:10.534 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:10.534 21:29:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:10.534 21:29:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:10.534 21:29:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:10.534 21:29:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:10.534 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:10.534 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:10.534 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.796 Malloc4 00:13:10.796 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:10.796 [2024-07-15 21:29:00.446631] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:10.796 [2024-07-15 21:29:00.593552] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:11.057 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:11.057 Asynchronous Event Request test 00:13:11.057 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.057 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.057 Registering asynchronous event callbacks... 00:13:11.057 Starting namespace attribute notice tests for all controllers... 00:13:11.057 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:11.057 aer_cb - Changed Namespace 00:13:11.057 Cleaning up... 00:13:11.057 [ 00:13:11.057 { 00:13:11.057 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:11.057 "subtype": "Discovery", 00:13:11.057 "listen_addresses": [], 00:13:11.057 "allow_any_host": true, 00:13:11.057 "hosts": [] 00:13:11.057 }, 00:13:11.057 { 00:13:11.057 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:11.057 "subtype": "NVMe", 00:13:11.057 "listen_addresses": [ 00:13:11.057 { 00:13:11.057 "trtype": "VFIOUSER", 00:13:11.057 "adrfam": "IPv4", 00:13:11.057 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:11.057 "trsvcid": "0" 00:13:11.057 } 00:13:11.057 ], 00:13:11.057 "allow_any_host": true, 00:13:11.057 "hosts": [], 00:13:11.057 "serial_number": "SPDK1", 00:13:11.057 "model_number": "SPDK bdev Controller", 00:13:11.057 "max_namespaces": 32, 00:13:11.057 "min_cntlid": 1, 00:13:11.057 "max_cntlid": 65519, 00:13:11.057 "namespaces": [ 00:13:11.057 { 00:13:11.057 "nsid": 1, 00:13:11.057 "bdev_name": "Malloc1", 00:13:11.057 "name": "Malloc1", 00:13:11.057 "nguid": "6A2323D5864A466BB464C539170A5D73", 00:13:11.057 "uuid": "6a2323d5-864a-466b-b464-c539170a5d73" 00:13:11.057 }, 00:13:11.057 { 00:13:11.057 "nsid": 2, 00:13:11.057 "bdev_name": "Malloc3", 00:13:11.057 "name": "Malloc3", 00:13:11.057 "nguid": "676B27BDAC214EFDAD76DF282C996B68", 00:13:11.057 "uuid": "676b27bd-ac21-4efd-ad76-df282c996b68" 00:13:11.057 } 00:13:11.057 ] 00:13:11.057 }, 00:13:11.057 { 00:13:11.057 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:11.057 "subtype": "NVMe", 00:13:11.057 "listen_addresses": [ 00:13:11.057 { 00:13:11.057 "trtype": "VFIOUSER", 00:13:11.057 "adrfam": "IPv4", 00:13:11.057 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:11.057 "trsvcid": "0" 00:13:11.057 } 00:13:11.057 ], 00:13:11.057 "allow_any_host": true, 00:13:11.057 "hosts": [], 00:13:11.057 "serial_number": "SPDK2", 00:13:11.057 "model_number": "SPDK bdev Controller", 00:13:11.057 
"max_namespaces": 32, 00:13:11.057 "min_cntlid": 1, 00:13:11.057 "max_cntlid": 65519, 00:13:11.057 "namespaces": [ 00:13:11.057 { 00:13:11.057 "nsid": 1, 00:13:11.058 "bdev_name": "Malloc2", 00:13:11.058 "name": "Malloc2", 00:13:11.058 "nguid": "8DDDBB90F6384C65B6644D560B76B7A3", 00:13:11.058 "uuid": "8dddbb90-f638-4c65-b664-4d560b76b7a3" 00:13:11.058 }, 00:13:11.058 { 00:13:11.058 "nsid": 2, 00:13:11.058 "bdev_name": "Malloc4", 00:13:11.058 "name": "Malloc4", 00:13:11.058 "nguid": "32B6659745194E949D1AA67D8F6F1F55", 00:13:11.058 "uuid": "32b66597-4519-4e94-9d1a-a67d8f6f1f55" 00:13:11.058 } 00:13:11.058 ] 00:13:11.058 } 00:13:11.058 ] 00:13:11.058 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2091589 00:13:11.058 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:11.058 21:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2082497 00:13:11.058 21:29:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2082497 ']' 00:13:11.058 21:29:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2082497 00:13:11.058 21:29:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:11.058 21:29:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:11.058 21:29:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2082497 00:13:11.058 21:29:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:11.058 21:29:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:11.058 21:29:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2082497' 00:13:11.058 killing process with pid 2082497 00:13:11.058 21:29:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2082497 00:13:11.058 21:29:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2082497 00:13:11.320 21:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:11.320 21:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:11.320 21:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:11.320 21:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:11.320 21:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:11.320 21:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2091905 00:13:11.320 21:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2091905' 00:13:11.320 Process pid: 2091905 00:13:11.320 21:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:11.320 21:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:11.320 21:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2091905 00:13:11.320 21:29:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2091905 ']' 00:13:11.320 21:29:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.320 21:29:01 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:11.320 21:29:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.320 21:29:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:11.320 21:29:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:11.320 [2024-07-15 21:29:01.064994] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:11.320 [2024-07-15 21:29:01.065897] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:13:11.320 [2024-07-15 21:29:01.065943] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.320 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.581 [2024-07-15 21:29:01.125008] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.581 [2024-07-15 21:29:01.190926] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.581 [2024-07-15 21:29:01.190963] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.581 [2024-07-15 21:29:01.190971] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.581 [2024-07-15 21:29:01.190977] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.581 [2024-07-15 21:29:01.190983] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.581 [2024-07-15 21:29:01.191139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.581 [2024-07-15 21:29:01.191267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.581 [2024-07-15 21:29:01.191500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.581 [2024-07-15 21:29:01.191501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.581 [2024-07-15 21:29:01.254405] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:11.581 [2024-07-15 21:29:01.254413] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:11.581 [2024-07-15 21:29:01.255411] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:11.581 [2024-07-15 21:29:01.255798] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:11.581 [2024-07-15 21:29:01.255910] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
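With the target restarted in interrupt mode, the script recreates the VFIOUSER transport (now passing -M -I) and rebuilds the same two subsystems before tearing everything down again. As a sketch of the per-device setup sequence, condensed from the RPC calls that follow (workspace prefix dropped, shown for device 2 only):

    # Create the vfio-user transport; -M -I are the extra transport arguments used for interrupt mode.
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I

    # Per device: socket directory, backing malloc bdev, subsystem, namespace, vfio-user listener.
    mkdir -p /var/run/vfio-user/domain/vfio-user2/2
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
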
00:13:12.153 21:29:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.153 21:29:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:12.153 21:29:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:13.100 21:29:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:13.392 21:29:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:13.392 21:29:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:13.392 21:29:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:13.392 21:29:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:13.392 21:29:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:13.392 Malloc1 00:13:13.651 21:29:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:13.651 21:29:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:13.911 21:29:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:13.911 21:29:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:13.911 21:29:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:13.911 21:29:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:14.171 Malloc2 00:13:14.171 21:29:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:14.432 21:29:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:14.432 21:29:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:14.693 21:29:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:14.693 21:29:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2091905 00:13:14.693 21:29:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2091905 ']' 00:13:14.693 21:29:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2091905 00:13:14.693 21:29:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:14.693 21:29:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:14.693 21:29:04 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2091905 00:13:14.693 21:29:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:14.693 21:29:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:14.693 21:29:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2091905' 00:13:14.693 killing process with pid 2091905 00:13:14.693 21:29:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2091905 00:13:14.693 21:29:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2091905 00:13:14.955 21:29:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:14.955 21:29:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:14.955 00:13:14.955 real 0m50.491s 00:13:14.955 user 3m20.250s 00:13:14.955 sys 0m2.925s 00:13:14.955 21:29:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:14.955 21:29:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:14.955 ************************************ 00:13:14.955 END TEST nvmf_vfio_user 00:13:14.955 ************************************ 00:13:14.955 21:29:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:14.955 21:29:04 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:14.955 21:29:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:14.955 21:29:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.955 21:29:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:14.955 ************************************ 00:13:14.955 START TEST nvmf_vfio_user_nvme_compliance 00:13:14.955 ************************************ 00:13:14.955 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:15.217 * Looking for test storage... 
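For orientation, the per-device setup that nvmf_vfio_user.sh just drove over rpc.py reduces to the loop below. This is a sketch: $rpc standing for scripts/rpc.py under $SPDK_ROOT is an assumption, everything else mirrors the commands logged above.

  rpc="$SPDK_ROOT/scripts/rpc.py"
  $rpc nvmf_create_transport -t VFIOUSER -M -I      # transport args '-M -I' as passed in this run
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $rpc bdev_malloc_create 64 512 -b Malloc$i
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done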
00:13:15.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2092652 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2092652' 00:13:15.217 Process pid: 2092652 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2092652 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2092652 ']' 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.217 21:29:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:15.217 [2024-07-15 21:29:04.873374] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:13:15.217 [2024-07-15 21:29:04.873442] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.217 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.217 [2024-07-15 21:29:04.937649] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:15.217 [2024-07-15 21:29:05.011583] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.217 [2024-07-15 21:29:05.011619] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.217 [2024-07-15 21:29:05.011627] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.217 [2024-07-15 21:29:05.011633] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.217 [2024-07-15 21:29:05.011642] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
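The EAL notice above about no free 2048 kB hugepages on node 1 is informational in this run (the target still starts); one way to cross-check it is against sysfs:

  # Per-NUMA-node 2 MiB hugepage pool sizes; a node printing 0 matches the EAL notice.
  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages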
00:13:15.217 [2024-07-15 21:29:05.011788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.217 [2024-07-15 21:29:05.011903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.217 [2024-07-15 21:29:05.011905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.159 21:29:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.159 21:29:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:16.159 21:29:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 malloc0 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.102 
21:29:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:17.102 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.102 00:13:17.102 00:13:17.102 CUnit - A unit testing framework for C - Version 2.1-3 00:13:17.102 http://cunit.sourceforge.net/ 00:13:17.102 00:13:17.102 00:13:17.102 Suite: nvme_compliance 00:13:17.363 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 21:29:06.916790] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.363 [2024-07-15 21:29:06.918135] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:17.363 [2024-07-15 21:29:06.918147] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:17.363 [2024-07-15 21:29:06.918153] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:17.363 [2024-07-15 21:29:06.919811] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.363 passed 00:13:17.363 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 21:29:07.015388] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.363 [2024-07-15 21:29:07.018402] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.363 passed 00:13:17.363 Test: admin_identify_ns ...[2024-07-15 21:29:07.113371] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.623 [2024-07-15 21:29:07.177142] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:17.623 [2024-07-15 21:29:07.185142] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:17.623 [2024-07-15 21:29:07.206250] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.623 passed 00:13:17.623 Test: admin_get_features_mandatory_features ...[2024-07-15 21:29:07.296866] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.623 [2024-07-15 21:29:07.300892] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.623 passed 00:13:17.623 Test: admin_get_features_optional_features ...[2024-07-15 21:29:07.393443] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.623 [2024-07-15 21:29:07.396463] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.884 passed 00:13:17.884 Test: admin_set_features_number_of_queues ...[2024-07-15 21:29:07.491462] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.884 [2024-07-15 21:29:07.596237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.884 passed 00:13:18.145 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 21:29:07.689871] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.145 [2024-07-15 21:29:07.692887] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.145 passed 00:13:18.145 Test: admin_get_log_page_with_lpo ...[2024-07-15 21:29:07.786010] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.145 [2024-07-15 21:29:07.853133] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:18.145 [2024-07-15 21:29:07.866197] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.145 passed 00:13:18.406 Test: fabric_property_get ...[2024-07-15 21:29:07.958230] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.406 [2024-07-15 21:29:07.959477] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:18.406 [2024-07-15 21:29:07.961248] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.406 passed 00:13:18.406 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 21:29:08.056802] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.406 [2024-07-15 21:29:08.058070] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:18.406 [2024-07-15 21:29:08.059827] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.406 passed 00:13:18.406 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 21:29:08.153967] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.667 [2024-07-15 21:29:08.237138] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:18.667 [2024-07-15 21:29:08.253136] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:18.667 [2024-07-15 21:29:08.258213] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.668 passed 00:13:18.668 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 21:29:08.349805] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.668 [2024-07-15 21:29:08.351047] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:18.668 [2024-07-15 21:29:08.352826] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.668 passed 00:13:18.668 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 21:29:08.443952] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.927 [2024-07-15 21:29:08.523128] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:18.927 [2024-07-15 21:29:08.547129] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:18.927 [2024-07-15 21:29:08.552206] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.927 passed 00:13:18.927 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 21:29:08.641786] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.927 [2024-07-15 21:29:08.643032] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:18.927 [2024-07-15 21:29:08.643054] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:18.927 [2024-07-15 21:29:08.644801] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.927 passed 00:13:19.188 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 21:29:08.737914] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.188 [2024-07-15 21:29:08.829129] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:19.188 [2024-07-15 21:29:08.837131] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:19.188 [2024-07-15 21:29:08.845132] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:19.188 [2024-07-15 21:29:08.853133] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:19.188 [2024-07-15 21:29:08.881211] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:19.188 passed 00:13:19.188 Test: admin_create_io_sq_verify_pc ...[2024-07-15 21:29:08.972806] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.188 [2024-07-15 21:29:08.989137] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:19.448 [2024-07-15 21:29:09.006946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:19.448 passed 00:13:19.448 Test: admin_create_io_qp_max_qps ...[2024-07-15 21:29:09.102499] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.830 [2024-07-15 21:29:10.215134] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:20.830 [2024-07-15 21:29:10.614448] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.091 passed 00:13:21.091 Test: admin_create_io_sq_shared_cq ...[2024-07-15 21:29:10.707650] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.091 [2024-07-15 21:29:10.839129] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:21.091 [2024-07-15 21:29:10.876182] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.352 passed 00:13:21.352 00:13:21.352 Run Summary: Type Total Ran Passed Failed Inactive 00:13:21.352 suites 1 1 n/a 0 0 00:13:21.352 tests 18 18 18 0 0 00:13:21.352 asserts 360 360 360 0 n/a 00:13:21.352 00:13:21.352 Elapsed time = 1.662 seconds 00:13:21.352 21:29:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2092652 00:13:21.352 21:29:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2092652 ']' 00:13:21.352 21:29:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2092652 00:13:21.352 21:29:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:21.352 21:29:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:21.352 21:29:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2092652 00:13:21.352 21:29:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:21.352 21:29:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:21.352 21:29:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2092652' 00:13:21.352 killing process with pid 2092652 00:13:21.352 21:29:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2092652 00:13:21.352 21:29:10 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2092652 00:13:21.352 21:29:11 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:21.352 21:29:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:21.352 00:13:21.352 real 0m6.437s 00:13:21.352 user 0m18.459s 00:13:21.352 sys 0m0.438s 00:13:21.352 21:29:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:21.352 21:29:11 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:21.352 ************************************ 00:13:21.352 END TEST nvmf_vfio_user_nvme_compliance 00:13:21.352 ************************************ 00:13:21.613 21:29:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:21.613 21:29:11 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:21.613 21:29:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:21.613 21:29:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:21.613 21:29:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:21.613 ************************************ 00:13:21.613 START TEST nvmf_vfio_user_fuzz 00:13:21.613 ************************************ 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:21.613 * Looking for test storage... 00:13:21.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
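The host NQN and host ID picked up by nvmf/common.sh above come from nvme-cli; roughly as sketched below. The UUID is host-specific, and the suffix strip is just one way to derive the bare ID that ends up in --hostid.

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID, later passed as --hostid=$NVME_HOSTID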
00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.613 21:29:11 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2094049 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2094049' 00:13:21.613 Process pid: 2094049 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2094049 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2094049 ']' 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
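The -m arguments used for the targets in this section (0x1 for this fuzz target, 0x7 and the explicit '[0,1,2,3]' list earlier) are CPU core masks. A quick illustrative helper, not part of the test scripts, to expand a mask into the core list SPDK will use:

  mask=0x1                                # 0x1 -> core 0, 0x7 -> cores 0 1 2
  for ((i = 0; i < 64; i++)); do
      (( (mask >> i) & 1 )) && printf '%d ' "$i"
  done; echo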
00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.613 21:29:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:22.554 21:29:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.554 21:29:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:22.554 21:29:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:23.495 malloc0 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:23.495 21:29:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:55.667 Fuzzing completed. 
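Condensed, the fuzz target above is assembled, exercised, and later torn down as follows. This is a sketch: $rpc and $SPDK_ROOT are the same assumed shorthands as before, and the fuzzer flags are copied verbatim from this run rather than documented individually.

  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  $rpc bdev_malloc_create 64 512 -b malloc0
  $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  $SPDK_ROOT/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
  # Teardown after the 30 s run (mirrors the delete_subsystem/killprocess steps logged below).
  $rpc nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
  kill $nvmfpid && wait $nvmfpid              # stand-in for the killprocess helper
  rm -rf /var/run/vfio-user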
Shutting down the fuzz application 00:13:55.667 00:13:55.667 Dumping successful admin opcodes: 00:13:55.667 8, 9, 10, 24, 00:13:55.667 Dumping successful io opcodes: 00:13:55.667 0, 00:13:55.667 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1222759, total successful commands: 4793, random_seed: 1993090560 00:13:55.667 NS: 0x200003a1ef00 admin qp, Total commands completed: 153680, total successful commands: 1239, random_seed: 3674886272 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2094049 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2094049 ']' 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2094049 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2094049 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2094049' 00:13:55.667 killing process with pid 2094049 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2094049 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2094049 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:55.667 00:13:55.667 real 0m33.686s 00:13:55.667 user 0m40.651s 00:13:55.667 sys 0m23.023s 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.667 21:29:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:55.667 ************************************ 00:13:55.667 END TEST nvmf_vfio_user_fuzz 00:13:55.667 ************************************ 00:13:55.667 21:29:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:55.667 21:29:44 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:55.667 21:29:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:55.667 21:29:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.667 21:29:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:55.667 ************************************ 
00:13:55.667 START TEST nvmf_host_management 00:13:55.667 ************************************ 00:13:55.667 21:29:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:55.667 * Looking for test storage... 00:13:55.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.667 21:29:45 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.668 
21:29:45 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:55.668 21:29:45 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:55.668 21:29:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:02.275 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.275 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:02.276 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:02.276 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:02.276 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.276 21:29:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.276 21:29:52 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.276 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:14:02.536 00:14:02.536 --- 10.0.0.2 ping statistics --- 00:14:02.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.536 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:14:02.536 00:14:02.536 --- 10.0.0.1 ping statistics --- 00:14:02.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.536 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2104234 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2104234 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2104234 ']' 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:02.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.536 21:29:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:02.536 [2024-07-15 21:29:52.214854] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:14:02.536 [2024-07-15 21:29:52.214917] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.536 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.536 [2024-07-15 21:29:52.303761] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.796 [2024-07-15 21:29:52.399548] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.796 [2024-07-15 21:29:52.399607] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.796 [2024-07-15 21:29:52.399616] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.796 [2024-07-15 21:29:52.399622] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.796 [2024-07-15 21:29:52.399628] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.796 [2024-07-15 21:29:52.399834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.796 [2024-07-15 21:29:52.399975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.796 [2024-07-15 21:29:52.400160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.796 [2024-07-15 21:29:52.400159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:03.367 21:29:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.367 21:29:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:03.367 21:29:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:03.367 21:29:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:03.367 21:29:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.367 [2024-07-15 21:29:53.044629] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.367 21:29:53 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.367 Malloc0 00:14:03.367 [2024-07-15 21:29:53.104046] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2104423 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2104423 /var/tmp/bdevperf.sock 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2104423 ']' 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:03.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
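At this point the target side is fully wired up: nvmf_tgt (pid 2104234) is running inside the cvl_0_0_ns_spdk namespace, the TCP transport was created earlier with rpc_cmd nvmf_create_transport -t tcp -o -u 8192, and host_management.sh has just replayed test/nvmf/target/rpcs.txt over /var/tmp/spdk.sock, which is what produces the Malloc0 bdev and the NVMe/TCP listener on 10.0.0.2:4420 reported above. A rough manual equivalent is sketched below; it assumes scripts/rpc.py from an SPDK tree and the default RPC socket, and the malloc size, block size and serial number are illustrative placeholders (the real values live in host_management.sh, which is not shown in this trace).

    # Sketch only: the names and endpoint (Malloc0, nqn.2016-06.io.spdk:cnode0,
    # nqn.2016-06.io.spdk:host0, 10.0.0.2:4420) come from the trace above;
    # the 64 MiB size, 512-byte block size and SPDK0 serial are assumptions.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0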
00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:03.367 { 00:14:03.367 "params": { 00:14:03.367 "name": "Nvme$subsystem", 00:14:03.367 "trtype": "$TEST_TRANSPORT", 00:14:03.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:03.367 "adrfam": "ipv4", 00:14:03.367 "trsvcid": "$NVMF_PORT", 00:14:03.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:03.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:03.367 "hdgst": ${hdgst:-false}, 00:14:03.367 "ddgst": ${ddgst:-false} 00:14:03.367 }, 00:14:03.367 "method": "bdev_nvme_attach_controller" 00:14:03.367 } 00:14:03.367 EOF 00:14:03.367 )") 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:03.367 21:29:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:03.627 21:29:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:03.627 "params": { 00:14:03.627 "name": "Nvme0", 00:14:03.627 "trtype": "tcp", 00:14:03.627 "traddr": "10.0.0.2", 00:14:03.627 "adrfam": "ipv4", 00:14:03.627 "trsvcid": "4420", 00:14:03.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:03.627 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:03.627 "hdgst": false, 00:14:03.627 "ddgst": false 00:14:03.627 }, 00:14:03.627 "method": "bdev_nvme_attach_controller" 00:14:03.627 }' 00:14:03.627 [2024-07-15 21:29:53.204554] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:14:03.627 [2024-07-15 21:29:53.204616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104423 ] 00:14:03.627 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.627 [2024-07-15 21:29:53.270849] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.627 [2024-07-15 21:29:53.335154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.887 Running I/O for 10 seconds... 
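The bdevperf initiator above is driven entirely by the JSON fragment that gen_nvmf_target_json writes to /dev/fd/63: a single bdev_nvme_attach_controller call that connects Nvme0 to nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420 as host nqn.2016-06.io.spdk:host0, with header and data digests disabled. The same run can be reproduced from a file; in the sketch below the params block is copied from the printf output, while the surrounding subsystems/bdev wrapper is an assumption about the standard SPDK JSON config layout that gen_nvmf_target_json emits (it may also append further entries), and /tmp/bdevperf.json is a hypothetical path.

    # Flags -r/--json/-q/-o/-w/-t match the invocation recorded in the trace.
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
        -q 64 -o 65536 -w verify -t 10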
00:14:04.460 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.460 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:04.460 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:04.460 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.460 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:04.460 21:29:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.460 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:04.460 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:04.460 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:04.460 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:04.460 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:04.460 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:04.460 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:04.460 21:29:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:04.460 21:29:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:04.460 21:29:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:04.460 21:29:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.460 21:29:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:04.460 21:29:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.460 21:29:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=513 00:14:04.460 21:29:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 513 -ge 100 ']' 00:14:04.460 21:29:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:04.460 21:29:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:04.460 21:29:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:04.460 21:29:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:04.460 21:29:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.460 21:29:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:04.460 [2024-07-15 21:29:54.055184] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055227] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055235] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be 
set 00:14:04.460 [2024-07-15 21:29:54.055248] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055255] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055262] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055269] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055275] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055282] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055288] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055294] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055300] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055306] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055313] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055320] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055326] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055333] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055339] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055345] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055352] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055358] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055364] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055371] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.460 [2024-07-15 21:29:54.055378] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055384] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055390] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055396] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055403] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055409] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055415] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055423] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055430] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055436] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055442] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055449] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055455] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055461] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055468] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055475] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055481] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055487] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055494] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055500] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055506] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055512] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055519] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055525] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055531] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055537] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055543] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055550] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055556] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055562] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055568] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055574] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055581] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055587] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055594] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055600] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055606] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055613] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055619] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.055625] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fc430 is same with the state(5) to be set 00:14:04.461 [2024-07-15 21:29:54.056075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:14:04.461 [2024-07-15 21:29:54.056546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.461 [2024-07-15 21:29:54.056557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:04.462 [2024-07-15 21:29:54.056721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 
21:29:54.056894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.056990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.056997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.057007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.057014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.057024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.057031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.057041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.057048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.057057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.057065] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.057075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.057082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.462 [2024-07-15 21:29:54.057096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.462 [2024-07-15 21:29:54.057104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.463 [2024-07-15 21:29:54.057113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.463 [2024-07-15 21:29:54.057125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.463 [2024-07-15 21:29:54.057135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.463 [2024-07-15 21:29:54.057143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.463 [2024-07-15 21:29:54.057153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.463 [2024-07-15 21:29:54.057162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.463 [2024-07-15 21:29:54.057171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.463 [2024-07-15 21:29:54.057179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.463 [2024-07-15 21:29:54.057189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.463 [2024-07-15 21:29:54.057197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.463 [2024-07-15 21:29:54.057213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.463 [2024-07-15 21:29:54.057222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.463 [2024-07-15 21:29:54.057233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.463 [2024-07-15 21:29:54.057244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.463 [2024-07-15 21:29:54.057255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.463 [2024-07-15 21:29:54.057265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.463 [2024-07-15 21:29:54.057277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.463 [2024-07-15 21:29:54.057286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.463 [2024-07-15 21:29:54.057296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf99d0 is same with the state(5) to be set 00:14:04.463 [2024-07-15 21:29:54.057339] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcf99d0 was disconnected and freed. reset controller. 00:14:04.463 [2024-07-15 21:29:54.058562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:04.463 21:29:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.463 21:29:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:04.463 task offset: 65536 on job bdev=Nvme0n1 fails 00:14:04.463 00:14:04.463 Latency(us) 00:14:04.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.463 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:04.463 Job: Nvme0n1 ended in about 0.54 seconds with error 00:14:04.463 Verification LBA range: start 0x0 length 0x400 00:14:04.463 Nvme0n1 : 0.54 952.93 59.56 119.12 0.00 58339.41 11632.64 50025.81 00:14:04.463 =================================================================================================================== 00:14:04.463 Total : 952.93 59.56 119.12 0.00 58339.41 11632.64 50025.81 00:14:04.463 [2024-07-15 21:29:54.060565] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:04.463 [2024-07-15 21:29:54.060589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e8510 (9): Bad file descriptor 00:14:04.463 21:29:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.463 21:29:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:04.463 21:29:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.463 21:29:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:04.463 [2024-07-15 21:29:54.113380] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
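The block above is the core of the host-management scenario rather than a malfunction: once waitforio sees at least 100 completed reads (read_io_count=513 in the trace), the test issues nvmf_subsystem_remove_host against the target while the verify workload is still running. The target drops the host's TCP queue pair, so every outstanding READ completes as ABORTED - SQ DELETION, the bdevperf job fails after 0.54 s, and bdev_nvme starts a controller reset; re-adding the host with nvmf_subsystem_add_host is what allows that reset to end with "Resetting controller successful". A condensed sketch of the same round trip follows, using plain scripts/rpc.py in place of the suite's rpc_cmd wrapper; the 0.25 s polling interval is an arbitrary choice.

    # Wait until the initiator has completed at least 100 reads (the same
    # threshold host_management.sh checks), then revoke and restore the host NQN.
    while [ "$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
               | jq -r '.bdevs[0].num_read_ops')" -lt 100 ]; do
        sleep 0.25
    done
    # Dropping the host closes the qpair: in-flight I/O is aborted (SQ DELETION).
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Restoring the host lets bdev_nvme's automatic controller reset reconnect.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0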
00:14:05.407 21:29:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2104423 00:14:05.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2104423) - No such process 00:14:05.407 21:29:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:05.407 21:29:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:05.407 21:29:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:05.407 21:29:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:05.407 21:29:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:05.407 21:29:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:05.407 21:29:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:05.407 21:29:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:05.407 { 00:14:05.407 "params": { 00:14:05.407 "name": "Nvme$subsystem", 00:14:05.407 "trtype": "$TEST_TRANSPORT", 00:14:05.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:05.407 "adrfam": "ipv4", 00:14:05.407 "trsvcid": "$NVMF_PORT", 00:14:05.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:05.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:05.407 "hdgst": ${hdgst:-false}, 00:14:05.407 "ddgst": ${ddgst:-false} 00:14:05.407 }, 00:14:05.407 "method": "bdev_nvme_attach_controller" 00:14:05.407 } 00:14:05.407 EOF 00:14:05.407 )") 00:14:05.407 21:29:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:05.407 21:29:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:05.407 21:29:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:05.407 21:29:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:05.407 "params": { 00:14:05.407 "name": "Nvme0", 00:14:05.407 "trtype": "tcp", 00:14:05.407 "traddr": "10.0.0.2", 00:14:05.407 "adrfam": "ipv4", 00:14:05.407 "trsvcid": "4420", 00:14:05.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:05.407 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:05.407 "hdgst": false, 00:14:05.407 "ddgst": false 00:14:05.407 }, 00:14:05.407 "method": "bdev_nvme_attach_controller" 00:14:05.407 }' 00:14:05.407 [2024-07-15 21:29:55.132277] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:14:05.407 [2024-07-15 21:29:55.132354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104777 ] 00:14:05.407 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.407 [2024-07-15 21:29:55.193020] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.668 [2024-07-15 21:29:55.256755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.668 Running I/O for 1 seconds... 
00:14:07.054 00:14:07.054 Latency(us) 00:14:07.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.054 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:07.054 Verification LBA range: start 0x0 length 0x400 00:14:07.054 Nvme0n1 : 1.02 1066.03 66.63 0.00 0.00 59159.87 13107.20 47841.28 00:14:07.054 =================================================================================================================== 00:14:07.054 Total : 1066.03 66.63 0.00 0.00 59159.87 13107.20 47841.28 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:07.054 rmmod nvme_tcp 00:14:07.054 rmmod nvme_fabrics 00:14:07.054 rmmod nvme_keyring 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2104234 ']' 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2104234 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2104234 ']' 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2104234 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2104234 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2104234' 00:14:07.054 killing process with pid 2104234 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2104234 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2104234 00:14:07.054 [2024-07-15 21:29:56.806928] 
app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.054 21:29:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.601 21:29:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:09.601 21:29:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:09.601 00:14:09.601 real 0m13.937s 00:14:09.601 user 0m22.154s 00:14:09.601 sys 0m6.229s 00:14:09.601 21:29:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:09.601 21:29:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:09.601 ************************************ 00:14:09.601 END TEST nvmf_host_management 00:14:09.601 ************************************ 00:14:09.601 21:29:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:09.601 21:29:58 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:09.601 21:29:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:09.601 21:29:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:09.601 21:29:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:09.601 ************************************ 00:14:09.601 START TEST nvmf_lvol 00:14:09.601 ************************************ 00:14:09.601 21:29:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:09.601 * Looking for test storage... 
00:14:09.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.601 21:29:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.602 21:29:59 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.602 21:29:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:17.744 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:17.744 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:17.744 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:17.744 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:17.744 
21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:17.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:14:17.744 00:14:17.744 --- 10.0.0.2 ping statistics --- 00:14:17.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.744 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:17.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.418 ms 00:14:17.744 00:14:17.744 --- 10.0.0.1 ping statistics --- 00:14:17.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.744 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:17.744 21:30:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:17.745 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:17.745 21:30:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:17.745 21:30:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:17.745 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2109537 00:14:17.745 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2109537 00:14:17.745 21:30:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:17.745 21:30:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2109537 ']' 00:14:17.745 21:30:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.745 21:30:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.745 21:30:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.745 21:30:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.745 21:30:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:17.745 [2024-07-15 21:30:06.493593] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:14:17.745 [2024-07-15 21:30:06.493660] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.745 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.745 [2024-07-15 21:30:06.567398] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:17.745 [2024-07-15 21:30:06.640762] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.745 [2024-07-15 21:30:06.640803] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:17.745 [2024-07-15 21:30:06.640811] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.745 [2024-07-15 21:30:06.640818] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.745 [2024-07-15 21:30:06.640823] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.745 [2024-07-15 21:30:06.641001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.745 [2024-07-15 21:30:06.641118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.745 [2024-07-15 21:30:06.641121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.745 21:30:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.745 21:30:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:17.745 21:30:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.745 21:30:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:17.745 21:30:07 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:17.745 21:30:07 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.745 21:30:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:17.745 [2024-07-15 21:30:07.458236] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.745 21:30:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:18.006 21:30:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:18.006 21:30:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:18.266 21:30:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:18.267 21:30:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:18.267 21:30:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:18.528 21:30:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d3f7a827-5d66-4611-bee7-acd819456042 00:14:18.528 21:30:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d3f7a827-5d66-4611-bee7-acd819456042 lvol 20 00:14:18.788 21:30:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c77f243d-d6fe-4be0-8164-1a58b0ae0f5e 00:14:18.788 21:30:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:18.788 21:30:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c77f243d-d6fe-4be0-8164-1a58b0ae0f5e 00:14:19.049 21:30:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
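Condensing the target-side RPC sequence traced above: nvmf_lvol stacks two 64 MiB malloc bdevs into a raid0, creates an lvstore and a 20 MiB lvol on it, and exports that lvol over NVMe-oF/TCP. A hedged sketch using the same rpc.py calls (flags copied from the trace; SPDK_DIR and the UUID-capturing variables are conveniences of this sketch rather than the test script's exact wording) is:

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
RPC="$SPDK_DIR/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192               # TCP transport, flags as traced above
$RPC bdev_malloc_create 64 512                             # -> Malloc0 (64 MiB, 512 B blocks)
$RPC bdev_malloc_create 64 512                             # -> Malloc1
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs_uuid=$($RPC bdev_lvol_create_lvstore raid0 lvs)        # prints the new lvstore UUID
lvol=$($RPC bdev_lvol_create -u "$lvs_uuid" lvol 20)       # 20 MiB lvol; prints the lvol bdev name
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The snapshot, resize, clone and inflate steps that follow below (bdev_lvol_snapshot, bdev_lvol_resize, bdev_lvol_clone, bdev_lvol_inflate) reuse the captured lvol and snapshot names in the same way.
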
00:14:19.049 [2024-07-15 21:30:08.822985] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.049 21:30:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:19.310 21:30:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2109917 00:14:19.310 21:30:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:19.310 21:30:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:19.310 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.254 21:30:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c77f243d-d6fe-4be0-8164-1a58b0ae0f5e MY_SNAPSHOT 00:14:20.519 21:30:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=28759ea1-a801-41e7-8578-6147c1044082 00:14:20.519 21:30:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c77f243d-d6fe-4be0-8164-1a58b0ae0f5e 30 00:14:20.780 21:30:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 28759ea1-a801-41e7-8578-6147c1044082 MY_CLONE 00:14:21.040 21:30:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=fea9324b-5bd4-4d74-bc7c-b99a77da0aa6 00:14:21.040 21:30:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate fea9324b-5bd4-4d74-bc7c-b99a77da0aa6 00:14:21.301 21:30:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2109917 00:14:31.295 Initializing NVMe Controllers 00:14:31.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:31.295 Controller IO queue size 128, less than required. 00:14:31.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:31.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:31.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:31.295 Initialization complete. Launching workers. 
00:14:31.295 ======================================================== 00:14:31.295 Latency(us) 00:14:31.295 Device Information : IOPS MiB/s Average min max 00:14:31.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12282.30 47.98 10424.77 1671.87 60996.56 00:14:31.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17810.30 69.57 7188.60 770.57 46582.16 00:14:31.295 ======================================================== 00:14:31.295 Total : 30092.59 117.55 8509.44 770.57 60996.56 00:14:31.295 00:14:31.295 21:30:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:31.295 21:30:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c77f243d-d6fe-4be0-8164-1a58b0ae0f5e 00:14:31.295 21:30:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d3f7a827-5d66-4611-bee7-acd819456042 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:31.296 rmmod nvme_tcp 00:14:31.296 rmmod nvme_fabrics 00:14:31.296 rmmod nvme_keyring 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2109537 ']' 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2109537 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2109537 ']' 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2109537 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2109537 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2109537' 00:14:31.296 killing process with pid 2109537 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2109537 00:14:31.296 21:30:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2109537 00:14:31.296 21:30:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.296 
21:30:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:31.296 21:30:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:31.296 21:30:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.296 21:30:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.296 21:30:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.296 21:30:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.296 21:30:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.681 21:30:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:32.681 00:14:32.681 real 0m23.246s 00:14:32.681 user 1m3.494s 00:14:32.681 sys 0m7.937s 00:14:32.681 21:30:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:32.681 21:30:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:32.681 ************************************ 00:14:32.681 END TEST nvmf_lvol 00:14:32.681 ************************************ 00:14:32.681 21:30:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:32.681 21:30:22 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:32.681 21:30:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:32.681 21:30:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.681 21:30:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:32.681 ************************************ 00:14:32.681 START TEST nvmf_lvs_grow 00:14:32.681 ************************************ 00:14:32.681 21:30:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:32.681 * Looking for test storage... 
00:14:32.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:32.681 21:30:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.681 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:32.682 21:30:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:40.825 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:40.825 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.825 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:40.826 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:40.826 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:40.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:14:40.826 00:14:40.826 --- 10.0.0.2 ping statistics --- 00:14:40.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.826 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:40.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:40.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:14:40.826 00:14:40.826 --- 10.0.0.1 ping statistics --- 00:14:40.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.826 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2116711 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2116711 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2116711 ']' 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:40.826 21:30:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:40.826 [2024-07-15 21:30:29.513234] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:14:40.826 [2024-07-15 21:30:29.513298] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.826 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.826 [2024-07-15 21:30:29.583070] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.826 [2024-07-15 21:30:29.656060] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.826 [2024-07-15 21:30:29.656098] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:40.826 [2024-07-15 21:30:29.656107] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.826 [2024-07-15 21:30:29.656113] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.826 [2024-07-15 21:30:29.656119] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.826 [2024-07-15 21:30:29.656150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.826 21:30:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.826 21:30:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:40.826 21:30:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.826 21:30:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:40.826 21:30:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:40.826 21:30:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.826 21:30:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:40.826 [2024-07-15 21:30:30.459453] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.826 21:30:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:40.826 21:30:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:40.826 21:30:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.826 21:30:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:40.826 ************************************ 00:14:40.827 START TEST lvs_grow_clean 00:14:40.827 ************************************ 00:14:40.827 21:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:40.827 21:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:40.827 21:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:40.827 21:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:40.827 21:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:40.827 21:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:40.827 21:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:40.827 21:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:40.827 21:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:40.827 21:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:41.087 21:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:41.087 21:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:41.087 21:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1149027a-4063-4ba9-b291-dce57517e05c 00:14:41.087 21:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1149027a-4063-4ba9-b291-dce57517e05c 00:14:41.087 21:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:41.347 21:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:41.347 21:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:41.347 21:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1149027a-4063-4ba9-b291-dce57517e05c lvol 150 00:14:41.606 21:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=82f9d5c0-787d-41cf-8ec4-0593ec13e1b1 00:14:41.606 21:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:41.607 21:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:41.607 [2024-07-15 21:30:31.318646] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:41.607 [2024-07-15 21:30:31.318695] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:41.607 true 00:14:41.607 21:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1149027a-4063-4ba9-b291-dce57517e05c 00:14:41.607 21:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:41.866 21:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:41.866 21:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:41.866 21:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 82f9d5c0-787d-41cf-8ec4-0593ec13e1b1 00:14:42.125 21:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:42.125 [2024-07-15 21:30:31.908453] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.125 21:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:42.384 21:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2117237 00:14:42.384 21:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.384 21:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:42.384 21:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2117237 /var/tmp/bdevperf.sock 00:14:42.384 21:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2117237 ']' 00:14:42.384 21:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.384 21:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.384 21:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:42.384 21:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.384 21:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:42.384 [2024-07-15 21:30:32.123792] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
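Before following the bdevperf startup output that continues below, it helps to see the target-side export the trace above just completed in one place: the TCP transport is created once, a subsystem is created, the 150M lvol is attached as its namespace, and TCP listeners are added for the subsystem and for discovery. The sketch below is illustrative only; paths are shortened, the lvol UUID is the one printed earlier in this run, and it is not the verbatim nvmf_lvs_grow.sh.

# Target-side export, condensed from the trace above (illustrative; paths shortened, UUID as logged).
LVOL=82f9d5c0-787d-41cf-8ec4-0593ec13e1b1                     # lvol bdev created a few steps earlier
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # done once, near the top of the test
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420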
00:14:42.384 [2024-07-15 21:30:32.123844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117237 ] 00:14:42.384 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.644 [2024-07-15 21:30:32.198321] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.644 [2024-07-15 21:30:32.266849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.214 21:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.214 21:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:43.214 21:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:43.475 Nvme0n1 00:14:43.475 21:30:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:43.736 [ 00:14:43.736 { 00:14:43.736 "name": "Nvme0n1", 00:14:43.736 "aliases": [ 00:14:43.736 "82f9d5c0-787d-41cf-8ec4-0593ec13e1b1" 00:14:43.736 ], 00:14:43.736 "product_name": "NVMe disk", 00:14:43.736 "block_size": 4096, 00:14:43.736 "num_blocks": 38912, 00:14:43.736 "uuid": "82f9d5c0-787d-41cf-8ec4-0593ec13e1b1", 00:14:43.736 "assigned_rate_limits": { 00:14:43.736 "rw_ios_per_sec": 0, 00:14:43.736 "rw_mbytes_per_sec": 0, 00:14:43.736 "r_mbytes_per_sec": 0, 00:14:43.736 "w_mbytes_per_sec": 0 00:14:43.736 }, 00:14:43.736 "claimed": false, 00:14:43.736 "zoned": false, 00:14:43.736 "supported_io_types": { 00:14:43.736 "read": true, 00:14:43.736 "write": true, 00:14:43.736 "unmap": true, 00:14:43.736 "flush": true, 00:14:43.736 "reset": true, 00:14:43.736 "nvme_admin": true, 00:14:43.736 "nvme_io": true, 00:14:43.736 "nvme_io_md": false, 00:14:43.736 "write_zeroes": true, 00:14:43.736 "zcopy": false, 00:14:43.736 "get_zone_info": false, 00:14:43.736 "zone_management": false, 00:14:43.736 "zone_append": false, 00:14:43.736 "compare": true, 00:14:43.736 "compare_and_write": true, 00:14:43.736 "abort": true, 00:14:43.736 "seek_hole": false, 00:14:43.736 "seek_data": false, 00:14:43.736 "copy": true, 00:14:43.736 "nvme_iov_md": false 00:14:43.736 }, 00:14:43.736 "memory_domains": [ 00:14:43.736 { 00:14:43.736 "dma_device_id": "system", 00:14:43.736 "dma_device_type": 1 00:14:43.736 } 00:14:43.736 ], 00:14:43.736 "driver_specific": { 00:14:43.736 "nvme": [ 00:14:43.736 { 00:14:43.736 "trid": { 00:14:43.736 "trtype": "TCP", 00:14:43.736 "adrfam": "IPv4", 00:14:43.736 "traddr": "10.0.0.2", 00:14:43.736 "trsvcid": "4420", 00:14:43.736 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:43.736 }, 00:14:43.736 "ctrlr_data": { 00:14:43.736 "cntlid": 1, 00:14:43.736 "vendor_id": "0x8086", 00:14:43.736 "model_number": "SPDK bdev Controller", 00:14:43.736 "serial_number": "SPDK0", 00:14:43.736 "firmware_revision": "24.09", 00:14:43.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:43.736 "oacs": { 00:14:43.736 "security": 0, 00:14:43.736 "format": 0, 00:14:43.736 "firmware": 0, 00:14:43.736 "ns_manage": 0 00:14:43.736 }, 00:14:43.736 "multi_ctrlr": true, 00:14:43.736 "ana_reporting": false 00:14:43.736 }, 
00:14:43.736 "vs": { 00:14:43.736 "nvme_version": "1.3" 00:14:43.736 }, 00:14:43.736 "ns_data": { 00:14:43.736 "id": 1, 00:14:43.736 "can_share": true 00:14:43.736 } 00:14:43.736 } 00:14:43.736 ], 00:14:43.736 "mp_policy": "active_passive" 00:14:43.736 } 00:14:43.736 } 00:14:43.736 ] 00:14:43.737 21:30:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2117441 00:14:43.737 21:30:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:43.737 21:30:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:43.737 Running I/O for 10 seconds... 00:14:44.677 Latency(us) 00:14:44.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.677 Nvme0n1 : 1.00 18156.00 70.92 0.00 0.00 0.00 0.00 0.00 00:14:44.677 =================================================================================================================== 00:14:44.677 Total : 18156.00 70.92 0.00 0.00 0.00 0.00 0.00 00:14:44.677 00:14:45.617 21:30:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1149027a-4063-4ba9-b291-dce57517e05c 00:14:45.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.617 Nvme0n1 : 2.00 18285.50 71.43 0.00 0.00 0.00 0.00 0.00 00:14:45.617 =================================================================================================================== 00:14:45.617 Total : 18285.50 71.43 0.00 0.00 0.00 0.00 0.00 00:14:45.617 00:14:45.878 true 00:14:45.878 21:30:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1149027a-4063-4ba9-b291-dce57517e05c 00:14:45.878 21:30:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:45.878 21:30:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:45.878 21:30:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:45.878 21:30:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2117441 00:14:46.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.819 Nvme0n1 : 3.00 18316.33 71.55 0.00 0.00 0.00 0.00 0.00 00:14:46.819 =================================================================================================================== 00:14:46.819 Total : 18316.33 71.55 0.00 0.00 0.00 0.00 0.00 00:14:46.819 00:14:47.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.815 Nvme0n1 : 4.00 18358.75 71.71 0.00 0.00 0.00 0.00 0.00 00:14:47.815 =================================================================================================================== 00:14:47.815 Total : 18358.75 71.71 0.00 0.00 0.00 0.00 0.00 00:14:47.815 00:14:48.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.757 Nvme0n1 : 5.00 18384.80 71.82 0.00 0.00 0.00 0.00 0.00 00:14:48.757 =================================================================================================================== 00:14:48.757 
Total : 18384.80 71.82 0.00 0.00 0.00 0.00 0.00 00:14:48.757 00:14:49.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.699 Nvme0n1 : 6.00 18401.17 71.88 0.00 0.00 0.00 0.00 0.00 00:14:49.699 =================================================================================================================== 00:14:49.699 Total : 18401.17 71.88 0.00 0.00 0.00 0.00 0.00 00:14:49.699 00:14:50.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.641 Nvme0n1 : 7.00 18408.29 71.91 0.00 0.00 0.00 0.00 0.00 00:14:50.641 =================================================================================================================== 00:14:50.641 Total : 18408.29 71.91 0.00 0.00 0.00 0.00 0.00 00:14:50.641 00:14:52.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.028 Nvme0n1 : 8.00 18419.38 71.95 0.00 0.00 0.00 0.00 0.00 00:14:52.028 =================================================================================================================== 00:14:52.028 Total : 18419.38 71.95 0.00 0.00 0.00 0.00 0.00 00:14:52.028 00:14:52.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.600 Nvme0n1 : 9.00 18420.78 71.96 0.00 0.00 0.00 0.00 0.00 00:14:52.600 =================================================================================================================== 00:14:52.600 Total : 18420.78 71.96 0.00 0.00 0.00 0.00 0.00 00:14:52.600 00:14:53.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.984 Nvme0n1 : 10.00 18428.20 71.99 0.00 0.00 0.00 0.00 0.00 00:14:53.984 =================================================================================================================== 00:14:53.984 Total : 18428.20 71.99 0.00 0.00 0.00 0.00 0.00 00:14:53.984 00:14:53.984 00:14:53.984 Latency(us) 00:14:53.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.984 Nvme0n1 : 10.01 18431.42 72.00 0.00 0.00 6940.76 4423.68 13653.33 00:14:53.984 =================================================================================================================== 00:14:53.984 Total : 18431.42 72.00 0.00 0.00 6940.76 4423.68 13653.33 00:14:53.984 0 00:14:53.984 21:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2117237 00:14:53.984 21:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2117237 ']' 00:14:53.984 21:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2117237 00:14:53.984 21:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:53.984 21:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:53.984 21:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2117237 00:14:53.984 21:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:53.984 21:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:53.984 21:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2117237' 00:14:53.984 killing process with pid 2117237 00:14:53.984 21:30:43 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2117237 00:14:53.984 Received shutdown signal, test time was about 10.000000 seconds 00:14:53.984 00:14:53.984 Latency(us) 00:14:53.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.984 =================================================================================================================== 00:14:53.984 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:53.984 21:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2117237 00:14:53.984 21:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:53.984 21:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:54.245 21:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1149027a-4063-4ba9-b291-dce57517e05c 00:14:54.245 21:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:54.506 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:54.506 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:54.506 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:54.506 [2024-07-15 21:30:44.259450] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:54.506 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1149027a-4063-4ba9-b291-dce57517e05c 00:14:54.506 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:54.506 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1149027a-4063-4ba9-b291-dce57517e05c 00:14:54.506 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.506 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.506 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.506 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.506 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.506 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:54.506 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:54.506 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:54.506 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1149027a-4063-4ba9-b291-dce57517e05c 00:14:54.767 request: 00:14:54.767 { 00:14:54.767 "uuid": "1149027a-4063-4ba9-b291-dce57517e05c", 00:14:54.767 "method": "bdev_lvol_get_lvstores", 00:14:54.767 "req_id": 1 00:14:54.767 } 00:14:54.767 Got JSON-RPC error response 00:14:54.767 response: 00:14:54.767 { 00:14:54.767 "code": -19, 00:14:54.767 "message": "No such device" 00:14:54.767 } 00:14:54.767 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:54.767 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:54.767 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:54.767 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:54.767 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:55.028 aio_bdev 00:14:55.028 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 82f9d5c0-787d-41cf-8ec4-0593ec13e1b1 00:14:55.028 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=82f9d5c0-787d-41cf-8ec4-0593ec13e1b1 00:14:55.028 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:55.028 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:55.028 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:55.028 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:55.028 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:55.028 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 82f9d5c0-787d-41cf-8ec4-0593ec13e1b1 -t 2000 00:14:55.289 [ 00:14:55.289 { 00:14:55.289 "name": "82f9d5c0-787d-41cf-8ec4-0593ec13e1b1", 00:14:55.289 "aliases": [ 00:14:55.289 "lvs/lvol" 00:14:55.289 ], 00:14:55.289 "product_name": "Logical Volume", 00:14:55.289 "block_size": 4096, 00:14:55.289 "num_blocks": 38912, 00:14:55.289 "uuid": "82f9d5c0-787d-41cf-8ec4-0593ec13e1b1", 00:14:55.289 "assigned_rate_limits": { 00:14:55.289 "rw_ios_per_sec": 0, 00:14:55.289 "rw_mbytes_per_sec": 0, 00:14:55.289 "r_mbytes_per_sec": 0, 00:14:55.289 "w_mbytes_per_sec": 0 00:14:55.289 }, 00:14:55.289 "claimed": false, 00:14:55.289 "zoned": false, 00:14:55.289 "supported_io_types": { 00:14:55.289 "read": true, 00:14:55.289 "write": true, 00:14:55.289 "unmap": true, 00:14:55.289 "flush": false, 00:14:55.289 "reset": true, 00:14:55.289 "nvme_admin": false, 00:14:55.289 "nvme_io": false, 00:14:55.289 
"nvme_io_md": false, 00:14:55.289 "write_zeroes": true, 00:14:55.289 "zcopy": false, 00:14:55.289 "get_zone_info": false, 00:14:55.289 "zone_management": false, 00:14:55.289 "zone_append": false, 00:14:55.289 "compare": false, 00:14:55.289 "compare_and_write": false, 00:14:55.289 "abort": false, 00:14:55.289 "seek_hole": true, 00:14:55.289 "seek_data": true, 00:14:55.289 "copy": false, 00:14:55.289 "nvme_iov_md": false 00:14:55.289 }, 00:14:55.289 "driver_specific": { 00:14:55.289 "lvol": { 00:14:55.289 "lvol_store_uuid": "1149027a-4063-4ba9-b291-dce57517e05c", 00:14:55.289 "base_bdev": "aio_bdev", 00:14:55.289 "thin_provision": false, 00:14:55.289 "num_allocated_clusters": 38, 00:14:55.289 "snapshot": false, 00:14:55.289 "clone": false, 00:14:55.289 "esnap_clone": false 00:14:55.289 } 00:14:55.289 } 00:14:55.289 } 00:14:55.289 ] 00:14:55.289 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:55.289 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1149027a-4063-4ba9-b291-dce57517e05c 00:14:55.289 21:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:55.289 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:55.289 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1149027a-4063-4ba9-b291-dce57517e05c 00:14:55.289 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:55.550 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:55.550 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 82f9d5c0-787d-41cf-8ec4-0593ec13e1b1 00:14:55.811 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1149027a-4063-4ba9-b291-dce57517e05c 00:14:55.811 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.072 00:14:56.072 real 0m15.249s 00:14:56.072 user 0m14.956s 00:14:56.072 sys 0m1.262s 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:56.072 ************************************ 00:14:56.072 END TEST lvs_grow_clean 00:14:56.072 ************************************ 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:56.072 ************************************ 00:14:56.072 START TEST lvs_grow_dirty 00:14:56.072 ************************************ 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.072 21:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:56.333 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:56.333 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:56.593 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f1ad28af-e0f9-4282-b4e8-5b80e0518469 00:14:56.593 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1ad28af-e0f9-4282-b4e8-5b80e0518469 00:14:56.593 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:56.593 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:56.593 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:56.593 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f1ad28af-e0f9-4282-b4e8-5b80e0518469 lvol 150 00:14:56.854 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8b273408-abb0-4e9a-a1a4-cb0708fb1ceb 00:14:56.854 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.854 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:56.854 
[2024-07-15 21:30:46.632515] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:56.854 [2024-07-15 21:30:46.632565] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:56.854 true 00:14:56.854 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1ad28af-e0f9-4282-b4e8-5b80e0518469 00:14:56.854 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:57.115 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:57.115 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:57.375 21:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8b273408-abb0-4e9a-a1a4-cb0708fb1ceb 00:14:57.375 21:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:57.636 [2024-07-15 21:30:47.270465] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.636 21:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:57.897 21:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2120255 00:14:57.897 21:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:57.897 21:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:57.898 21:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2120255 /var/tmp/bdevperf.sock 00:14:57.898 21:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2120255 ']' 00:14:57.898 21:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:57.898 21:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.898 21:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:57.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
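bdevperf has just been launched as the NVMe/TCP initiator for the dirty run; its startup, the controller attach, and the 10-second randwrite job follow below. Condensed, the initiator side looks like the sketch here, the same flow as in the clean case (illustrative only; paths shortened).

# Initiator side, condensed from the surrounding trace (illustrative; paths shortened).
SOCK=/var/tmp/bdevperf.sock
./build/examples/bdevperf -r "$SOCK" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# -z keeps bdevperf idle until it is configured over its own RPC socket.
./scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
./examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests    # drives the per-second IOPS table seen below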
00:14:57.898 21:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.898 21:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:57.898 [2024-07-15 21:30:47.499721] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:14:57.898 [2024-07-15 21:30:47.499771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120255 ] 00:14:57.898 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.898 [2024-07-15 21:30:47.573999] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.898 [2024-07-15 21:30:47.628054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.468 21:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.468 21:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:58.468 21:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:58.729 Nvme0n1 00:14:58.729 21:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:58.990 [ 00:14:58.990 { 00:14:58.990 "name": "Nvme0n1", 00:14:58.990 "aliases": [ 00:14:58.990 "8b273408-abb0-4e9a-a1a4-cb0708fb1ceb" 00:14:58.990 ], 00:14:58.990 "product_name": "NVMe disk", 00:14:58.990 "block_size": 4096, 00:14:58.990 "num_blocks": 38912, 00:14:58.990 "uuid": "8b273408-abb0-4e9a-a1a4-cb0708fb1ceb", 00:14:58.990 "assigned_rate_limits": { 00:14:58.990 "rw_ios_per_sec": 0, 00:14:58.990 "rw_mbytes_per_sec": 0, 00:14:58.990 "r_mbytes_per_sec": 0, 00:14:58.990 "w_mbytes_per_sec": 0 00:14:58.990 }, 00:14:58.990 "claimed": false, 00:14:58.990 "zoned": false, 00:14:58.990 "supported_io_types": { 00:14:58.990 "read": true, 00:14:58.990 "write": true, 00:14:58.990 "unmap": true, 00:14:58.990 "flush": true, 00:14:58.990 "reset": true, 00:14:58.990 "nvme_admin": true, 00:14:58.990 "nvme_io": true, 00:14:58.990 "nvme_io_md": false, 00:14:58.990 "write_zeroes": true, 00:14:58.990 "zcopy": false, 00:14:58.990 "get_zone_info": false, 00:14:58.990 "zone_management": false, 00:14:58.990 "zone_append": false, 00:14:58.990 "compare": true, 00:14:58.990 "compare_and_write": true, 00:14:58.990 "abort": true, 00:14:58.990 "seek_hole": false, 00:14:58.990 "seek_data": false, 00:14:58.990 "copy": true, 00:14:58.990 "nvme_iov_md": false 00:14:58.990 }, 00:14:58.990 "memory_domains": [ 00:14:58.990 { 00:14:58.990 "dma_device_id": "system", 00:14:58.990 "dma_device_type": 1 00:14:58.990 } 00:14:58.990 ], 00:14:58.990 "driver_specific": { 00:14:58.990 "nvme": [ 00:14:58.990 { 00:14:58.990 "trid": { 00:14:58.990 "trtype": "TCP", 00:14:58.990 "adrfam": "IPv4", 00:14:58.990 "traddr": "10.0.0.2", 00:14:58.990 "trsvcid": "4420", 00:14:58.990 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:58.990 }, 00:14:58.990 "ctrlr_data": { 00:14:58.990 "cntlid": 1, 00:14:58.990 "vendor_id": "0x8086", 00:14:58.990 "model_number": "SPDK bdev Controller", 00:14:58.990 "serial_number": "SPDK0", 
00:14:58.990 "firmware_revision": "24.09", 00:14:58.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:58.990 "oacs": { 00:14:58.990 "security": 0, 00:14:58.990 "format": 0, 00:14:58.990 "firmware": 0, 00:14:58.990 "ns_manage": 0 00:14:58.990 }, 00:14:58.990 "multi_ctrlr": true, 00:14:58.990 "ana_reporting": false 00:14:58.990 }, 00:14:58.990 "vs": { 00:14:58.990 "nvme_version": "1.3" 00:14:58.990 }, 00:14:58.990 "ns_data": { 00:14:58.990 "id": 1, 00:14:58.990 "can_share": true 00:14:58.990 } 00:14:58.990 } 00:14:58.990 ], 00:14:58.990 "mp_policy": "active_passive" 00:14:58.990 } 00:14:58.990 } 00:14:58.990 ] 00:14:58.990 21:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2120523 00:14:58.990 21:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:58.990 21:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:58.990 Running I/O for 10 seconds... 00:15:00.393 Latency(us) 00:15:00.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.393 Nvme0n1 : 1.00 18191.00 71.06 0.00 0.00 0.00 0.00 0.00 00:15:00.393 =================================================================================================================== 00:15:00.393 Total : 18191.00 71.06 0.00 0.00 0.00 0.00 0.00 00:15:00.393 00:15:00.962 21:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f1ad28af-e0f9-4282-b4e8-5b80e0518469 00:15:01.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.222 Nvme0n1 : 2.00 18247.50 71.28 0.00 0.00 0.00 0.00 0.00 00:15:01.222 =================================================================================================================== 00:15:01.223 Total : 18247.50 71.28 0.00 0.00 0.00 0.00 0.00 00:15:01.223 00:15:01.223 true 00:15:01.223 21:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1ad28af-e0f9-4282-b4e8-5b80e0518469 00:15:01.223 21:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:01.223 21:30:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:01.223 21:30:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:01.223 21:30:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2120523 00:15:02.213 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.213 Nvme0n1 : 3.00 18297.67 71.48 0.00 0.00 0.00 0.00 0.00 00:15:02.213 =================================================================================================================== 00:15:02.213 Total : 18297.67 71.48 0.00 0.00 0.00 0.00 0.00 00:15:02.213 00:15:03.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.151 Nvme0n1 : 4.00 18339.75 71.64 0.00 0.00 0.00 0.00 0.00 00:15:03.151 =================================================================================================================== 00:15:03.151 Total : 18339.75 71.64 0.00 
0.00 0.00 0.00 0.00 00:15:03.151 00:15:04.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.131 Nvme0n1 : 5.00 18358.20 71.71 0.00 0.00 0.00 0.00 0.00 00:15:04.131 =================================================================================================================== 00:15:04.131 Total : 18358.20 71.71 0.00 0.00 0.00 0.00 0.00 00:15:04.131 00:15:05.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.071 Nvme0n1 : 6.00 18383.33 71.81 0.00 0.00 0.00 0.00 0.00 00:15:05.071 =================================================================================================================== 00:15:05.071 Total : 18383.33 71.81 0.00 0.00 0.00 0.00 0.00 00:15:05.071 00:15:06.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.011 Nvme0n1 : 7.00 18397.57 71.87 0.00 0.00 0.00 0.00 0.00 00:15:06.011 =================================================================================================================== 00:15:06.011 Total : 18397.57 71.87 0.00 0.00 0.00 0.00 0.00 00:15:06.011 00:15:07.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.393 Nvme0n1 : 8.00 18411.50 71.92 0.00 0.00 0.00 0.00 0.00 00:15:07.393 =================================================================================================================== 00:15:07.393 Total : 18411.50 71.92 0.00 0.00 0.00 0.00 0.00 00:15:07.393 00:15:08.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.334 Nvme0n1 : 9.00 18419.44 71.95 0.00 0.00 0.00 0.00 0.00 00:15:08.334 =================================================================================================================== 00:15:08.334 Total : 18419.44 71.95 0.00 0.00 0.00 0.00 0.00 00:15:08.334 00:15:09.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.274 Nvme0n1 : 10.00 18433.50 72.01 0.00 0.00 0.00 0.00 0.00 00:15:09.274 =================================================================================================================== 00:15:09.274 Total : 18433.50 72.01 0.00 0.00 0.00 0.00 0.00 00:15:09.274 00:15:09.274 00:15:09.274 Latency(us) 00:15:09.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.274 Nvme0n1 : 10.00 18432.91 72.00 0.00 0.00 6940.71 3399.68 11741.87 00:15:09.274 =================================================================================================================== 00:15:09.274 Total : 18432.91 72.00 0.00 0.00 6940.71 3399.68 11741.87 00:15:09.274 0 00:15:09.274 21:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2120255 00:15:09.274 21:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2120255 ']' 00:15:09.274 21:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2120255 00:15:09.274 21:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:09.274 21:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:09.274 21:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2120255 00:15:09.274 21:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:09.274 21:30:58 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:09.274 21:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2120255' 00:15:09.274 killing process with pid 2120255 00:15:09.274 21:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2120255 00:15:09.274 Received shutdown signal, test time was about 10.000000 seconds 00:15:09.274 00:15:09.274 Latency(us) 00:15:09.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.274 =================================================================================================================== 00:15:09.274 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:09.274 21:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2120255 00:15:09.274 21:30:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:09.533 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1ad28af-e0f9-4282-b4e8-5b80e0518469 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2116711 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2116711 00:15:09.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2116711 Killed "${NVMF_APP[@]}" "$@" 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2122629 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2122629 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2122629 ']' 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.792 21:30:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:10.052 [2024-07-15 21:30:59.603105] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:15:10.052 [2024-07-15 21:30:59.603164] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.052 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.052 [2024-07-15 21:30:59.668901] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.052 [2024-07-15 21:30:59.734970] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.052 [2024-07-15 21:30:59.735004] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.052 [2024-07-15 21:30:59.735012] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.052 [2024-07-15 21:30:59.735018] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.052 [2024-07-15 21:30:59.735023] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
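The target that held the grown lvstore has just been killed with SIGKILL and restarted inside the same network namespace; once its reactor comes up below, re-creating the AIO bdev triggers the blobstore recovery notices that follow, and the test then checks that the grow survived the crash. A minimal sketch of that dirty-restart sequence, with the pid, namespace name and UUIDs taken from this run and "aio_file" standing in for the full aio_bdev path:

# Dirty restart, condensed from the surrounding trace (illustrative only).
kill -9 2116711                                                  # SIGKILL the old target; lvstore left open/dirty
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # restarted target (pid 2122629 here)
./scripts/rpc.py bdev_aio_create aio_file aio_bdev 4096          # re-attach the file; blobstore recovery runs here
./scripts/rpc.py bdev_get_bdevs -b 8b273408-abb0-4e9a-a1a4-cb0708fb1ceb -t 2000   # wait for the lvol to reappear
./scripts/rpc.py bdev_lvol_get_lvstores -u f1ad28af-e0f9-4282-b4e8-5b80e0518469 \
        | jq -r '.[0].free_clusters'                             # expect 61: the grow persisted across the crash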
00:15:10.052 [2024-07-15 21:30:59.735041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.621 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.621 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:10.621 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:10.621 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:10.621 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:10.621 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.621 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:10.882 [2024-07-15 21:31:00.563815] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:10.882 [2024-07-15 21:31:00.563897] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:10.882 [2024-07-15 21:31:00.563926] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:10.882 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:10.882 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8b273408-abb0-4e9a-a1a4-cb0708fb1ceb 00:15:10.882 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8b273408-abb0-4e9a-a1a4-cb0708fb1ceb 00:15:10.882 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:10.882 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:10.882 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:10.882 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:10.882 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:11.140 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8b273408-abb0-4e9a-a1a4-cb0708fb1ceb -t 2000 00:15:11.140 [ 00:15:11.140 { 00:15:11.140 "name": "8b273408-abb0-4e9a-a1a4-cb0708fb1ceb", 00:15:11.140 "aliases": [ 00:15:11.140 "lvs/lvol" 00:15:11.140 ], 00:15:11.140 "product_name": "Logical Volume", 00:15:11.140 "block_size": 4096, 00:15:11.140 "num_blocks": 38912, 00:15:11.140 "uuid": "8b273408-abb0-4e9a-a1a4-cb0708fb1ceb", 00:15:11.141 "assigned_rate_limits": { 00:15:11.141 "rw_ios_per_sec": 0, 00:15:11.141 "rw_mbytes_per_sec": 0, 00:15:11.141 "r_mbytes_per_sec": 0, 00:15:11.141 "w_mbytes_per_sec": 0 00:15:11.141 }, 00:15:11.141 "claimed": false, 00:15:11.141 "zoned": false, 00:15:11.141 "supported_io_types": { 00:15:11.141 "read": true, 00:15:11.141 "write": true, 00:15:11.141 "unmap": true, 00:15:11.141 "flush": false, 00:15:11.141 "reset": true, 00:15:11.141 "nvme_admin": false, 00:15:11.141 "nvme_io": false, 00:15:11.141 "nvme_io_md": 
false, 00:15:11.141 "write_zeroes": true, 00:15:11.141 "zcopy": false, 00:15:11.141 "get_zone_info": false, 00:15:11.141 "zone_management": false, 00:15:11.141 "zone_append": false, 00:15:11.141 "compare": false, 00:15:11.141 "compare_and_write": false, 00:15:11.141 "abort": false, 00:15:11.141 "seek_hole": true, 00:15:11.141 "seek_data": true, 00:15:11.141 "copy": false, 00:15:11.141 "nvme_iov_md": false 00:15:11.141 }, 00:15:11.141 "driver_specific": { 00:15:11.141 "lvol": { 00:15:11.141 "lvol_store_uuid": "f1ad28af-e0f9-4282-b4e8-5b80e0518469", 00:15:11.141 "base_bdev": "aio_bdev", 00:15:11.141 "thin_provision": false, 00:15:11.141 "num_allocated_clusters": 38, 00:15:11.141 "snapshot": false, 00:15:11.141 "clone": false, 00:15:11.141 "esnap_clone": false 00:15:11.141 } 00:15:11.141 } 00:15:11.141 } 00:15:11.141 ] 00:15:11.141 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:11.141 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1ad28af-e0f9-4282-b4e8-5b80e0518469 00:15:11.141 21:31:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:11.399 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:11.399 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1ad28af-e0f9-4282-b4e8-5b80e0518469 00:15:11.399 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:11.659 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:11.659 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:11.659 [2024-07-15 21:31:01.363789] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:11.659 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1ad28af-e0f9-4282-b4e8-5b80e0518469 00:15:11.659 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:11.659 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1ad28af-e0f9-4282-b4e8-5b80e0518469 00:15:11.659 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.659 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.659 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.659 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.659 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:15:11.659 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.659 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.659 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:11.659 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1ad28af-e0f9-4282-b4e8-5b80e0518469 00:15:11.918 request: 00:15:11.918 { 00:15:11.918 "uuid": "f1ad28af-e0f9-4282-b4e8-5b80e0518469", 00:15:11.918 "method": "bdev_lvol_get_lvstores", 00:15:11.918 "req_id": 1 00:15:11.918 } 00:15:11.918 Got JSON-RPC error response 00:15:11.918 response: 00:15:11.918 { 00:15:11.918 "code": -19, 00:15:11.918 "message": "No such device" 00:15:11.918 } 00:15:11.918 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:11.918 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:11.918 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:11.918 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:11.918 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:12.177 aio_bdev 00:15:12.177 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8b273408-abb0-4e9a-a1a4-cb0708fb1ceb 00:15:12.177 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8b273408-abb0-4e9a-a1a4-cb0708fb1ceb 00:15:12.177 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:12.177 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:12.177 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:12.177 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:12.177 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:12.177 21:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8b273408-abb0-4e9a-a1a4-cb0708fb1ceb -t 2000 00:15:12.438 [ 00:15:12.438 { 00:15:12.438 "name": "8b273408-abb0-4e9a-a1a4-cb0708fb1ceb", 00:15:12.438 "aliases": [ 00:15:12.438 "lvs/lvol" 00:15:12.438 ], 00:15:12.438 "product_name": "Logical Volume", 00:15:12.438 "block_size": 4096, 00:15:12.438 "num_blocks": 38912, 00:15:12.438 "uuid": "8b273408-abb0-4e9a-a1a4-cb0708fb1ceb", 00:15:12.438 "assigned_rate_limits": { 00:15:12.438 "rw_ios_per_sec": 0, 00:15:12.438 "rw_mbytes_per_sec": 0, 00:15:12.438 "r_mbytes_per_sec": 0, 00:15:12.438 "w_mbytes_per_sec": 0 00:15:12.438 }, 00:15:12.438 "claimed": false, 00:15:12.438 "zoned": false, 00:15:12.438 "supported_io_types": { 
00:15:12.438 "read": true, 00:15:12.438 "write": true, 00:15:12.438 "unmap": true, 00:15:12.438 "flush": false, 00:15:12.438 "reset": true, 00:15:12.438 "nvme_admin": false, 00:15:12.438 "nvme_io": false, 00:15:12.438 "nvme_io_md": false, 00:15:12.438 "write_zeroes": true, 00:15:12.438 "zcopy": false, 00:15:12.438 "get_zone_info": false, 00:15:12.438 "zone_management": false, 00:15:12.438 "zone_append": false, 00:15:12.438 "compare": false, 00:15:12.438 "compare_and_write": false, 00:15:12.438 "abort": false, 00:15:12.438 "seek_hole": true, 00:15:12.438 "seek_data": true, 00:15:12.438 "copy": false, 00:15:12.438 "nvme_iov_md": false 00:15:12.438 }, 00:15:12.438 "driver_specific": { 00:15:12.438 "lvol": { 00:15:12.438 "lvol_store_uuid": "f1ad28af-e0f9-4282-b4e8-5b80e0518469", 00:15:12.438 "base_bdev": "aio_bdev", 00:15:12.438 "thin_provision": false, 00:15:12.438 "num_allocated_clusters": 38, 00:15:12.438 "snapshot": false, 00:15:12.438 "clone": false, 00:15:12.438 "esnap_clone": false 00:15:12.438 } 00:15:12.438 } 00:15:12.438 } 00:15:12.438 ] 00:15:12.438 21:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:12.438 21:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1ad28af-e0f9-4282-b4e8-5b80e0518469 00:15:12.438 21:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:12.438 21:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:12.438 21:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1ad28af-e0f9-4282-b4e8-5b80e0518469 00:15:12.438 21:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:12.699 21:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:12.699 21:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8b273408-abb0-4e9a-a1a4-cb0708fb1ceb 00:15:12.959 21:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f1ad28af-e0f9-4282-b4e8-5b80e0518469 00:15:12.959 21:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:13.220 00:15:13.220 real 0m17.051s 00:15:13.220 user 0m44.413s 00:15:13.220 sys 0m2.854s 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:13.220 ************************************ 00:15:13.220 END TEST lvs_grow_dirty 00:15:13.220 ************************************ 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:13.220 nvmf_trace.0 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:13.220 21:31:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:13.220 rmmod nvme_tcp 00:15:13.220 rmmod nvme_fabrics 00:15:13.480 rmmod nvme_keyring 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2122629 ']' 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2122629 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2122629 ']' 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2122629 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2122629 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2122629' 00:15:13.480 killing process with pid 2122629 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2122629 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2122629 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:13.480 
21:31:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.480 21:31:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.022 21:31:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:16.022 00:15:16.022 real 0m43.018s 00:15:16.022 user 1m5.388s 00:15:16.022 sys 0m9.719s 00:15:16.022 21:31:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:16.022 21:31:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:16.022 ************************************ 00:15:16.022 END TEST nvmf_lvs_grow 00:15:16.022 ************************************ 00:15:16.022 21:31:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:16.022 21:31:05 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:16.022 21:31:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:16.022 21:31:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.022 21:31:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.022 ************************************ 00:15:16.022 START TEST nvmf_bdev_io_wait 00:15:16.022 ************************************ 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:16.022 * Looking for test storage... 
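Note: the process_shm step above collects the target's trace shared-memory file (the /dev/shm/nvmf_trace.0 mentioned in the startup notices) so it can be inspected offline with spdk_trace. A hedged recap of that step as it ran here, restructured into a small loop:

    # Archive the trace shm file for offline analysis; shm id and output path are this run's values.
    shm_id=0
    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    for f in $(find /dev/shm -name "*.$shm_id" -printf '%f\n'); do
        tar -C /dev/shm/ -cvzf "$out/${f}_shm.tar.gz" "$f"
    done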
00:15:16.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.022 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:16.023 21:31:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:22.607 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:22.607 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:22.607 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.607 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:22.608 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.608 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.868 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.868 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.868 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:22.868 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.868 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.868 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.868 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:23.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:23.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:15:23.128 00:15:23.128 --- 10.0.0.2 ping statistics --- 00:15:23.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.128 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:23.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:23.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:15:23.128 00:15:23.128 --- 10.0.0.1 ping statistics --- 00:15:23.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.128 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2127595 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2127595 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2127595 ']' 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:23.128 21:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:23.128 [2024-07-15 21:31:12.781824] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
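Note: the test-bed wiring exercised just above (nvmf_tcp_init in nvmf/common.sh) moves one port of the detected e810 pair into a private network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420 and verifies reachability with ping. Condensed, keeping the interface names detected on this host (cvl_0_0 under 0000:4b:00.0, cvl_0_1 under 0000:4b:00.1):

    # Hedged recap of nvmf_tcp_init as it ran above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1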
00:15:23.128 [2024-07-15 21:31:12.781885] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.129 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.129 [2024-07-15 21:31:12.852789] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.129 [2024-07-15 21:31:12.928583] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.129 [2024-07-15 21:31:12.928618] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.129 [2024-07-15 21:31:12.928626] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.129 [2024-07-15 21:31:12.928633] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.129 [2024-07-15 21:31:12.928638] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.129 [2024-07-15 21:31:12.928780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.129 [2024-07-15 21:31:12.928900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.129 [2024-07-15 21:31:12.929062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.129 [2024-07-15 21:31:12.929063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 [2024-07-15 21:31:13.669418] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 Malloc0 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.096 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:24.097 [2024-07-15 21:31:13.741462] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2127769 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2127772 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:24.097 { 00:15:24.097 "params": { 00:15:24.097 "name": "Nvme$subsystem", 00:15:24.097 "trtype": "$TEST_TRANSPORT", 00:15:24.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:24.097 "adrfam": "ipv4", 00:15:24.097 "trsvcid": "$NVMF_PORT", 00:15:24.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:24.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:24.097 "hdgst": ${hdgst:-false}, 00:15:24.097 "ddgst": ${ddgst:-false} 00:15:24.097 }, 00:15:24.097 "method": "bdev_nvme_attach_controller" 00:15:24.097 } 00:15:24.097 EOF 00:15:24.097 )") 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2127775 00:15:24.097 21:31:13 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2127777 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:24.097 { 00:15:24.097 "params": { 00:15:24.097 "name": "Nvme$subsystem", 00:15:24.097 "trtype": "$TEST_TRANSPORT", 00:15:24.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:24.097 "adrfam": "ipv4", 00:15:24.097 "trsvcid": "$NVMF_PORT", 00:15:24.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:24.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:24.097 "hdgst": ${hdgst:-false}, 00:15:24.097 "ddgst": ${ddgst:-false} 00:15:24.097 }, 00:15:24.097 "method": "bdev_nvme_attach_controller" 00:15:24.097 } 00:15:24.097 EOF 00:15:24.097 )") 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:24.097 { 00:15:24.097 "params": { 00:15:24.097 "name": "Nvme$subsystem", 00:15:24.097 "trtype": "$TEST_TRANSPORT", 00:15:24.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:24.097 "adrfam": "ipv4", 00:15:24.097 "trsvcid": "$NVMF_PORT", 00:15:24.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:24.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:24.097 "hdgst": ${hdgst:-false}, 00:15:24.097 "ddgst": ${ddgst:-false} 00:15:24.097 }, 00:15:24.097 "method": "bdev_nvme_attach_controller" 00:15:24.097 } 00:15:24.097 EOF 00:15:24.097 )") 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:24.097 { 00:15:24.097 "params": { 00:15:24.097 "name": "Nvme$subsystem", 00:15:24.097 "trtype": "$TEST_TRANSPORT", 00:15:24.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:24.097 "adrfam": "ipv4", 00:15:24.097 "trsvcid": "$NVMF_PORT", 00:15:24.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:24.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:24.097 "hdgst": ${hdgst:-false}, 00:15:24.097 "ddgst": ${ddgst:-false} 00:15:24.097 }, 00:15:24.097 "method": "bdev_nvme_attach_controller" 00:15:24.097 } 00:15:24.097 EOF 00:15:24.097 )") 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2127769 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:24.097 "params": { 00:15:24.097 "name": "Nvme1", 00:15:24.097 "trtype": "tcp", 00:15:24.097 "traddr": "10.0.0.2", 00:15:24.097 "adrfam": "ipv4", 00:15:24.097 "trsvcid": "4420", 00:15:24.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:24.097 "hdgst": false, 00:15:24.097 "ddgst": false 00:15:24.097 }, 00:15:24.097 "method": "bdev_nvme_attach_controller" 00:15:24.097 }' 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
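Note: each bdevperf instance above receives its configuration through --json /dev/fd/63; gen_nvmf_target_json fills the heredoc template with this run's transport values, and the resolved bdev_nvme_attach_controller parameters are printed just below. Only that params block is shown verbatim in the log; the surrounding "subsystems" wrapper in the sketch is an assumption about the standard SPDK JSON-config shape. A hypothetical standalone equivalent of one of these runs:

    cat > /tmp/bdevperf_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w write -t 1 -s 256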
00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:24.097 "params": { 00:15:24.097 "name": "Nvme1", 00:15:24.097 "trtype": "tcp", 00:15:24.097 "traddr": "10.0.0.2", 00:15:24.097 "adrfam": "ipv4", 00:15:24.097 "trsvcid": "4420", 00:15:24.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:24.097 "hdgst": false, 00:15:24.097 "ddgst": false 00:15:24.097 }, 00:15:24.097 "method": "bdev_nvme_attach_controller" 00:15:24.097 }' 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:24.097 "params": { 00:15:24.097 "name": "Nvme1", 00:15:24.097 "trtype": "tcp", 00:15:24.097 "traddr": "10.0.0.2", 00:15:24.097 "adrfam": "ipv4", 00:15:24.097 "trsvcid": "4420", 00:15:24.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:24.097 "hdgst": false, 00:15:24.097 "ddgst": false 00:15:24.097 }, 00:15:24.097 "method": "bdev_nvme_attach_controller" 00:15:24.097 }' 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:24.097 21:31:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:24.097 "params": { 00:15:24.097 "name": "Nvme1", 00:15:24.097 "trtype": "tcp", 00:15:24.097 "traddr": "10.0.0.2", 00:15:24.097 "adrfam": "ipv4", 00:15:24.097 "trsvcid": "4420", 00:15:24.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:24.097 "hdgst": false, 00:15:24.097 "ddgst": false 00:15:24.097 }, 00:15:24.097 "method": "bdev_nvme_attach_controller" 00:15:24.097 }' 00:15:24.097 [2024-07-15 21:31:13.795204] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:15:24.097 [2024-07-15 21:31:13.795252] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:24.097 [2024-07-15 21:31:13.795445] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:15:24.097 [2024-07-15 21:31:13.795500] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:24.097 [2024-07-15 21:31:13.797508] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:15:24.097 [2024-07-15 21:31:13.797562] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:24.097 [2024-07-15 21:31:13.798855] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:15:24.097 [2024-07-15 21:31:13.798898] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:24.097 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.368 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.368 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.368 [2024-07-15 21:31:13.931030] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.368 [2024-07-15 21:31:13.960315] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.368 [2024-07-15 21:31:13.982210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:24.368 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.368 [2024-07-15 21:31:14.010983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:24.368 [2024-07-15 21:31:14.028840] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.368 [2024-07-15 21:31:14.078738] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.368 [2024-07-15 21:31:14.080053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:24.368 [2024-07-15 21:31:14.129138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:24.628 Running I/O for 1 seconds... 00:15:24.628 Running I/O for 1 seconds... 00:15:24.628 Running I/O for 1 seconds... 00:15:24.628 Running I/O for 1 seconds... 00:15:25.570 00:15:25.570 Latency(us) 00:15:25.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.570 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:25.570 Nvme1n1 : 1.02 4454.99 17.40 0.00 0.00 28512.22 8956.59 38884.69 00:15:25.570 =================================================================================================================== 00:15:25.570 Total : 4454.99 17.40 0.00 0.00 28512.22 8956.59 38884.69 00:15:25.570 00:15:25.570 Latency(us) 00:15:25.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.570 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:25.570 Nvme1n1 : 1.01 12736.56 49.75 0.00 0.00 10016.29 5789.01 19442.35 00:15:25.570 =================================================================================================================== 00:15:25.570 Total : 12736.56 49.75 0.00 0.00 10016.29 5789.01 19442.35 00:15:25.570 00:15:25.570 Latency(us) 00:15:25.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.570 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:25.570 Nvme1n1 : 1.00 20971.59 81.92 0.00 0.00 6089.38 3372.37 16493.23 00:15:25.570 =================================================================================================================== 00:15:25.570 Total : 20971.59 81.92 0.00 0.00 6089.38 3372.37 16493.23 00:15:25.831 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2127772 00:15:25.831 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2127775 00:15:25.831 00:15:25.831 Latency(us) 00:15:25.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.831 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:25.831 Nvme1n1 : 1.00 187286.08 731.59 0.00 0.00 680.51 273.07 778.24 00:15:25.831 
=================================================================================================================== 00:15:25.831 Total : 187286.08 731.59 0.00 0.00 680.51 273.07 778.24 00:15:25.831 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2127777 00:15:25.831 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.831 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.831 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.831 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.831 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:25.831 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:25.831 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:25.831 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:25.831 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:25.831 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:25.831 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:25.831 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.831 rmmod nvme_tcp 00:15:25.831 rmmod nvme_fabrics 00:15:25.831 rmmod nvme_keyring 00:15:26.091 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:26.091 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:26.091 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2127595 ']' 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2127595 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2127595 ']' 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2127595 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2127595 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2127595' 00:15:26.092 killing process with pid 2127595 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2127595 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2127595 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
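Note: the teardown that follows the I/O runs (nvmftestfini above) unloads the host-side NVMe modules and stops the target through the harness's killprocess helper. A simplified sketch of that shutdown pattern, not the exact helper; $nvmfpid is the target pid recorded at start-up (2127595 in this run):

    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    if kill -0 "$nvmfpid" 2>/dev/null && [ "$(ps --no-headers -o comm= "$nvmfpid")" != sudo ]; then
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid"
        wait "$nvmfpid"        # the target was launched from this shell, so wait reaps it
    fi
    ip -4 addr flush cvl_0_1   # drop the initiator-side test address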
00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.092 21:31:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.639 21:31:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:28.639 00:15:28.639 real 0m12.500s 00:15:28.639 user 0m17.909s 00:15:28.639 sys 0m7.044s 00:15:28.639 21:31:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.640 21:31:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:28.640 ************************************ 00:15:28.640 END TEST nvmf_bdev_io_wait 00:15:28.640 ************************************ 00:15:28.640 21:31:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:28.640 21:31:17 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:28.640 21:31:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:28.640 21:31:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.640 21:31:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:28.640 ************************************ 00:15:28.640 START TEST nvmf_queue_depth 00:15:28.640 ************************************ 00:15:28.640 21:31:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:28.640 * Looking for test storage... 
00:15:28.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:28.640 21:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:35.241 
21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:35.241 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:35.242 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:35.242 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:35.242 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:35.242 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:35.242 21:31:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:35.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:35.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:15:35.502 00:15:35.502 --- 10.0.0.2 ping statistics --- 00:15:35.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.502 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:15:35.502 00:15:35.502 --- 10.0.0.1 ping statistics --- 00:15:35.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.502 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2132307 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2132307 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2132307 ']' 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.502 21:31:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:35.502 [2024-07-15 21:31:25.241666] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:15:35.502 [2024-07-15 21:31:25.241717] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.502 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.762 [2024-07-15 21:31:25.323466] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.762 [2024-07-15 21:31:25.387306] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.762 [2024-07-15 21:31:25.387342] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.762 [2024-07-15 21:31:25.387349] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.762 [2024-07-15 21:31:25.387356] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.763 [2024-07-15 21:31:25.387362] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.763 [2024-07-15 21:31:25.387381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.333 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.333 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:36.333 21:31:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.333 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.333 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:36.333 21:31:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.333 21:31:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:36.333 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.333 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:36.333 [2024-07-15 21:31:26.103205] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.333 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.333 21:31:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:36.333 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.333 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:36.594 Malloc0 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.594 
21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:36.594 [2024-07-15 21:31:26.183409] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2132531 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2132531 /var/tmp/bdevperf.sock 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2132531 ']' 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:36.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:36.594 21:31:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:36.594 [2024-07-15 21:31:26.238660] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:15:36.594 [2024-07-15 21:31:26.238725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2132531 ] 00:15:36.594 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.594 [2024-07-15 21:31:26.302069] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.594 [2024-07-15 21:31:26.376934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.536 21:31:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:37.536 21:31:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:37.536 21:31:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:37.536 21:31:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.536 21:31:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:37.536 NVMe0n1 00:15:37.536 21:31:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.536 21:31:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:37.536 Running I/O for 10 seconds... 00:15:47.528 00:15:47.528 Latency(us) 00:15:47.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.528 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:47.528 Verification LBA range: start 0x0 length 0x4000 00:15:47.528 NVMe0n1 : 10.06 11444.76 44.71 0.00 0.00 89110.51 16274.77 72526.51 00:15:47.528 =================================================================================================================== 00:15:47.528 Total : 11444.76 44.71 0.00 0.00 89110.51 16274.77 72526.51 00:15:47.528 0 00:15:47.528 21:31:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2132531 00:15:47.528 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2132531 ']' 00:15:47.528 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2132531 00:15:47.528 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:47.528 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2132531 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2132531' 00:15:47.788 killing process with pid 2132531 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2132531 00:15:47.788 Received shutdown signal, test time was about 10.000000 seconds 00:15:47.788 00:15:47.788 Latency(us) 00:15:47.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.788 
=================================================================================================================== 00:15:47.788 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2132531 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:47.788 rmmod nvme_tcp 00:15:47.788 rmmod nvme_fabrics 00:15:47.788 rmmod nvme_keyring 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2132307 ']' 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2132307 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2132307 ']' 00:15:47.788 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2132307 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2132307 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2132307' 00:15:48.048 killing process with pid 2132307 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2132307 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2132307 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.048 21:31:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.591 21:31:39 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:50.591 00:15:50.591 real 0m21.863s 00:15:50.591 user 0m25.553s 00:15:50.591 sys 0m6.407s 00:15:50.591 21:31:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:50.591 21:31:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.591 ************************************ 00:15:50.591 END TEST nvmf_queue_depth 00:15:50.591 ************************************ 00:15:50.591 21:31:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:50.591 21:31:39 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:50.591 21:31:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:50.591 21:31:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.591 21:31:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:50.591 ************************************ 00:15:50.591 START TEST nvmf_target_multipath 00:15:50.591 ************************************ 00:15:50.591 21:31:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:50.591 * Looking for test storage... 00:15:50.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:50.591 21:31:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:57.215 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:57.215 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:57.215 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:57.215 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:57.215 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:57.216 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.216 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:57.216 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:57.216 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:57.216 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:57.216 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:57.216 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:57.216 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:57.216 21:31:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:57.216 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:57.477 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:57.477 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:57.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:15:57.477 00:15:57.477 --- 10.0.0.2 ping statistics --- 00:15:57.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.477 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:15:57.477 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:57.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:57.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.404 ms 00:15:57.477 00:15:57.477 --- 10.0.0.1 ping statistics --- 00:15:57.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.477 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:15:57.477 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.477 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:57.477 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:57.477 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.477 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:57.477 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:57.477 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.477 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:57.477 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:57.477 21:31:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:57.478 only one NIC for nvmf test 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:57.478 rmmod nvme_tcp 00:15:57.478 rmmod nvme_fabrics 00:15:57.478 rmmod nvme_keyring 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.478 21:31:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.027 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:00.027 21:31:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:00.027 21:31:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:00.027 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:00.027 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:00.027 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:00.027 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:00.028 00:16:00.028 real 0m9.371s 00:16:00.028 user 0m2.051s 00:16:00.028 sys 0m5.219s 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:00.028 21:31:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:00.028 ************************************ 00:16:00.028 END TEST nvmf_target_multipath 00:16:00.028 ************************************ 00:16:00.028 21:31:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:00.028 21:31:49 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:00.028 21:31:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:00.028 21:31:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:00.028 21:31:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:00.028 ************************************ 00:16:00.028 START TEST nvmf_zcopy 00:16:00.028 ************************************ 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:00.028 * Looking for test storage... 
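Stripped of the xtrace prefixes, the nvmf_target_multipath teardown traced above (nvmftestfini) amounts to roughly the sequence below. This is a sketch reconstructed from the trace, not the verbatim nvmf/common.sh source; the interface name cvl_0_1 is taken from this run.

    # Sketch of the nvmftestfini path seen in the trace above (reconstructed, not verbatim).
    sync
    set +e
    for i in {1..20}; do
        # unload is retried because the modules can still be busy right after the test
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    _remove_spdk_ns 14> /dev/null   # harness helper; presumably deletes the cvl_0_0_ns_spdk namespace
    ip -4 addr flush cvl_0_1        # flush the initiator-side interface, exactly as in the trace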
00:16:00.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:00.028 21:31:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:06.615 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.615 
21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:06.615 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:06.615 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:06.615 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:06.615 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.616 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.616 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.616 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.616 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:06.616 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:06.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:16:06.875 00:16:06.875 --- 10.0.0.2 ping statistics --- 00:16:06.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.875 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:06.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:16:06.875 00:16:06.875 --- 10.0.0.1 ping statistics --- 00:16:06.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.875 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2142989 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2142989 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2142989 ']' 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.875 21:31:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:06.875 [2024-07-15 21:31:56.619551] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:06.875 [2024-07-15 21:31:56.619599] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.875 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.136 [2024-07-15 21:31:56.701828] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.136 [2024-07-15 21:31:56.764350] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.136 [2024-07-15 21:31:56.764384] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:07.136 [2024-07-15 21:31:56.764392] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.136 [2024-07-15 21:31:56.764398] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.136 [2024-07-15 21:31:56.764403] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.136 [2024-07-15 21:31:56.764423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:07.708 [2024-07-15 21:31:57.446699] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:07.708 [2024-07-15 21:31:57.462945] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:07.708 malloc0 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.708 
21:31:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:07.708 { 00:16:07.708 "params": { 00:16:07.708 "name": "Nvme$subsystem", 00:16:07.708 "trtype": "$TEST_TRANSPORT", 00:16:07.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:07.708 "adrfam": "ipv4", 00:16:07.708 "trsvcid": "$NVMF_PORT", 00:16:07.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:07.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:07.708 "hdgst": ${hdgst:-false}, 00:16:07.708 "ddgst": ${ddgst:-false} 00:16:07.708 }, 00:16:07.708 "method": "bdev_nvme_attach_controller" 00:16:07.708 } 00:16:07.708 EOF 00:16:07.708 )") 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:07.708 21:31:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:07.969 21:31:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:07.969 21:31:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:07.969 "params": { 00:16:07.969 "name": "Nvme1", 00:16:07.969 "trtype": "tcp", 00:16:07.969 "traddr": "10.0.0.2", 00:16:07.969 "adrfam": "ipv4", 00:16:07.969 "trsvcid": "4420", 00:16:07.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:07.969 "hdgst": false, 00:16:07.969 "ddgst": false 00:16:07.969 }, 00:16:07.969 "method": "bdev_nvme_attach_controller" 00:16:07.969 }' 00:16:07.969 [2024-07-15 21:31:57.550209] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:07.969 [2024-07-15 21:31:57.550279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143097 ] 00:16:07.969 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.969 [2024-07-15 21:31:57.615880] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.969 [2024-07-15 21:31:57.689838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.230 Running I/O for 10 seconds... 
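For reference, the target/zcopy.sh setup captured above boils down to the following sequence. The commands are replayed verbatim from the rpc_cmd calls in the trace; rpc_cmd is the test harness's wrapper around the SPDK JSON-RPC client talking to the nvmf_tgt started inside cvl_0_0_ns_spdk (the wrapper's socket path is not shown in this excerpt).

    # Target side: zero-copy TCP transport, one subsystem, one malloc namespace, data + discovery listeners
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # Initiator side: bdevperf reads the generated JSON shown above from an fd and runs 10 s of verify I/O
    build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192

A second bdevperf run follows below with -t 5 -q 128 -w randrw -M 50 -o 8192; the long run of "Requested NSID 1 already in use" / "Unable to add namespace" errors after it comes from the test repeatedly re-issuing nvmf_subsystem_add_ns for the already-populated NSID 1 while that workload is in flight, so those errors are expected output rather than a failure.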
00:16:18.225 00:16:18.225 Latency(us) 00:16:18.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.225 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:18.225 Verification LBA range: start 0x0 length 0x1000 00:16:18.225 Nvme1n1 : 10.01 9712.91 75.88 0.00 0.00 13126.66 1856.85 34515.63 00:16:18.225 =================================================================================================================== 00:16:18.225 Total : 9712.91 75.88 0.00 0.00 13126.66 1856.85 34515.63 00:16:18.225 21:32:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:18.225 21:32:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2145193 00:16:18.225 21:32:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:18.225 21:32:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:18.225 21:32:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:18.225 21:32:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:18.225 21:32:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:18.225 21:32:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:18.225 21:32:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:18.225 { 00:16:18.225 "params": { 00:16:18.225 "name": "Nvme$subsystem", 00:16:18.225 "trtype": "$TEST_TRANSPORT", 00:16:18.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.225 "adrfam": "ipv4", 00:16:18.225 "trsvcid": "$NVMF_PORT", 00:16:18.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.225 "hdgst": ${hdgst:-false}, 00:16:18.225 "ddgst": ${ddgst:-false} 00:16:18.225 }, 00:16:18.225 "method": "bdev_nvme_attach_controller" 00:16:18.225 } 00:16:18.225 EOF 00:16:18.225 )") 00:16:18.225 [2024-07-15 21:32:08.007571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.225 [2024-07-15 21:32:08.007600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.225 21:32:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:18.225 21:32:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:16:18.225 [2024-07-15 21:32:08.015557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.225 [2024-07-15 21:32:08.015565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.225 21:32:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:18.225 21:32:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:18.225 "params": { 00:16:18.225 "name": "Nvme1", 00:16:18.225 "trtype": "tcp", 00:16:18.225 "traddr": "10.0.0.2", 00:16:18.225 "adrfam": "ipv4", 00:16:18.225 "trsvcid": "4420", 00:16:18.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.225 "hdgst": false, 00:16:18.225 "ddgst": false 00:16:18.225 }, 00:16:18.225 "method": "bdev_nvme_attach_controller" 00:16:18.225 }' 00:16:18.225 [2024-07-15 21:32:08.023574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.225 [2024-07-15 21:32:08.023582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.225 [2024-07-15 21:32:08.024922] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:18.225 [2024-07-15 21:32:08.024958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145193 ] 00:16:18.486 [2024-07-15 21:32:08.031595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.486 [2024-07-15 21:32:08.031603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.486 [2024-07-15 21:32:08.039613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.486 [2024-07-15 21:32:08.039621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.486 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.486 [2024-07-15 21:32:08.047633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.486 [2024-07-15 21:32:08.047641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.486 [2024-07-15 21:32:08.055654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.486 [2024-07-15 21:32:08.055661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.486 [2024-07-15 21:32:08.063674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.486 [2024-07-15 21:32:08.063681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.486 [2024-07-15 21:32:08.071695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.486 [2024-07-15 21:32:08.071703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.486 [2024-07-15 21:32:08.077338] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.486 [2024-07-15 21:32:08.079715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.486 [2024-07-15 21:32:08.079722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.486 [2024-07-15 21:32:08.087735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.486 [2024-07-15 21:32:08.087744] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.486 [2024-07-15 21:32:08.095756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.486 [2024-07-15 21:32:08.095763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.486 [2024-07-15 21:32:08.103776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.486 [2024-07-15 21:32:08.103784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.486 [2024-07-15 21:32:08.111796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.486 [2024-07-15 21:32:08.111806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.119818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.119829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.127838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.127845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.135860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.135867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.143881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.143889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.146583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.487 [2024-07-15 21:32:08.151900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.151909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.159923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.159932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.167945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.167957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.175964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.175972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.183984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.183992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.192002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.192010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.200023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.200030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.208042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.208049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.220087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.220102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.228098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.228108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.236118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.236132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.244143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.244152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.252162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.252169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.260177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.260187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.268197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.268203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.276218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.276225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.487 [2024-07-15 21:32:08.284241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.487 [2024-07-15 21:32:08.284249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.292264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.292273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.300286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.300295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.308306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.308313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.316327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.316334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.324346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:16:18.748 [2024-07-15 21:32:08.324353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.332368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.332375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.340387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.340394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.348409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.348418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.356431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.356437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.364452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.364458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.372473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.372479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.380493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.380499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.388514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.388523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.396534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.396541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.404556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.404562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.412576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.412586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.420598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.420605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.428621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.428628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.436642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.436648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 
21:32:08.444680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.444694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 Running I/O for 5 seconds... 00:16:18.748 [2024-07-15 21:32:08.452682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.452689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.465928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.465943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.475187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.475203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.484117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.484138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.491967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.491982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.501334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.501349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.509787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.509801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.518899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.518914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.528057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.528071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.536587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.536602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.748 [2024-07-15 21:32:08.545725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:18.748 [2024-07-15 21:32:08.545740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.010 [2024-07-15 21:32:08.554020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.010 [2024-07-15 21:32:08.554035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.010 [2024-07-15 21:32:08.562686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:19.010 [2024-07-15 21:32:08.562700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.010 [2024-07-15 21:32:08.571053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:19.010 [2024-07-15 21:32:08.571067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:19.010 [2024-07-15 21:32:08.579379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:19.010 [2024-07-15 21:32:08.579394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
... [the same pair of errors - subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: "Unable to add namespace" - repeats for every subsequent add attempt from 21:32:08.588 through 21:32:11.224 (Jenkins timestamps 00:16:19.010 to 00:16:21.450)] ...
00:16:21.450 [2024-07-15 21:32:11.224240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:21.450 [2024-07-15 21:32:11.224254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:21.450 [2024-07-15 21:32:11.233161]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.450 [2024-07-15 21:32:11.233176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.450 [2024-07-15 21:32:11.241765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.450 [2024-07-15 21:32:11.241780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.450 [2024-07-15 21:32:11.250639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.450 [2024-07-15 21:32:11.250654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.712 [2024-07-15 21:32:11.259004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.712 [2024-07-15 21:32:11.259018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.712 [2024-07-15 21:32:11.267408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.712 [2024-07-15 21:32:11.267423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.712 [2024-07-15 21:32:11.276183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.712 [2024-07-15 21:32:11.276198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.712 [2024-07-15 21:32:11.284484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.712 [2024-07-15 21:32:11.284498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.712 [2024-07-15 21:32:11.292802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.712 [2024-07-15 21:32:11.292817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.712 [2024-07-15 21:32:11.301755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.712 [2024-07-15 21:32:11.301770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.712 [2024-07-15 21:32:11.310740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.712 [2024-07-15 21:32:11.310755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.712 [2024-07-15 21:32:11.319702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.712 [2024-07-15 21:32:11.319716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.712 [2024-07-15 21:32:11.328341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.712 [2024-07-15 21:32:11.328356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.712 [2024-07-15 21:32:11.336926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.712 [2024-07-15 21:32:11.336941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.712 [2024-07-15 21:32:11.345231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.712 [2024-07-15 21:32:11.345245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.712 [2024-07-15 21:32:11.354291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.354305] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.362709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.362724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.371501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.371515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.380484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.380498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.389485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.389500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.398294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.398308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.407166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.407180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.415755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.415770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.424466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.424480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.432983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.432997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.442035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.442049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.450699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.450713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.459187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.459202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.467436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.467451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.476363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.476378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.485045] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.485060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.493508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.493522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.502433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.502447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.713 [2024-07-15 21:32:11.511007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.713 [2024-07-15 21:32:11.511022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.520048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.520063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.527960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.527974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.536485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.536499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.545529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.545544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.553759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.553773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.562497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.562512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.570991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.571006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.580059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.580073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.588950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.588966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.598016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.598031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.606657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.606672] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.614851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.614866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.623693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.623708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.632438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.632453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.641096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.641111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.649539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.649554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.658164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.658179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.667147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.667161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.676119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.676138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.684639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.684654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.693619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.693633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.702690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.702708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.711078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.711092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.719270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.719285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.728327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.728341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.736485] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.736499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.745062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.745077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.753073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.753088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.761899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.761913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:21.974 [2024-07-15 21:32:11.770799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:21.974 [2024-07-15 21:32:11.770813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.779253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.779267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.788300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.788314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.796140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.796153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.805051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.805065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.813592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.813606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.822426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.822440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.831646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.831661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.840115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.840133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.848955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.848969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.857253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.857267] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.865535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.865552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.874117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.874135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.883263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.883277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.892201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.892215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.900586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.900600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.917871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.917886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.926065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.926079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.934552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.934566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.942511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.942525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.951697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.951711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.960315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.960330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.969048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.969062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.977261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.977275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.985991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.986005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:11.994330] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:11.994344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:12.003402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:12.003416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:12.012361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:12.012376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:12.020150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:12.020164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:12.029136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:12.029150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.235 [2024-07-15 21:32:12.037464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.235 [2024-07-15 21:32:12.037482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.046131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.046145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.054885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.054899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.063895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.063909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.072239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.072253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.080893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.080907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.089777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.089792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.098014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.098028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.106735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.106750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.115377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.115391] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.124462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.124476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.132938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.132953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.141595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.141609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.150309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.150323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.159148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.159162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.167211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.167225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.176164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.176179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.185173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.185188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.194068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.194082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.202879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.202896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.211289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.211302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.219618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.219632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.228578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.228592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.237092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.237106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.245602] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.245616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.254066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.254080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.262644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.262659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.271342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.271356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.280262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.280276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.289013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.289027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.496 [2024-07-15 21:32:12.298009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.496 [2024-07-15 21:32:12.298023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.306942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.306955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.315999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.316014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.324884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.324898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.333732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.333746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.341514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.341529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.350522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.350536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.359590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.359604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.368082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.368097] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.377078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.377092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.385514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.385529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.394414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.394428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.403411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.403425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.412506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.412521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.421531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.421546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.430419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.430433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.439409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.439423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.447642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.447656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.456603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.456617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.465059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.465073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.473276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.473290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.481804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.481818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.490518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.490532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.499551] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.499565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.508541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.508555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.516527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.516542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.525548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.525562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.533859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.533874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.542450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.542464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.551134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.551149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.758 [2024-07-15 21:32:12.559305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:22.758 [2024-07-15 21:32:12.559319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.019 [2024-07-15 21:32:12.568079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.019 [2024-07-15 21:32:12.568093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.019 [2024-07-15 21:32:12.577057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.019 [2024-07-15 21:32:12.577072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.019 [2024-07-15 21:32:12.585534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.019 [2024-07-15 21:32:12.585548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.019 [2024-07-15 21:32:12.593973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.019 [2024-07-15 21:32:12.593987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.019 [2024-07-15 21:32:12.602804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.019 [2024-07-15 21:32:12.602818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.019 [2024-07-15 21:32:12.611700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.019 [2024-07-15 21:32:12.611714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.019 [2024-07-15 21:32:12.620648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.019 [2024-07-15 21:32:12.620662] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.019 [2024-07-15 21:32:12.629686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.019 [2024-07-15 21:32:12.629700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.019 [2024-07-15 21:32:12.638574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.019 [2024-07-15 21:32:12.638588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.646864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.646878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.655369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.655384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.664176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.664191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.673396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.673410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.682340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.682354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.691379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.691394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.700261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.700276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.709003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.709017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.717760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.717774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.726166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.726181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.734831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.734845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.743330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.743344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.752265] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.752279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.760880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.760895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.769409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.769423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.778447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.778463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.787434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.787449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.795750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.795765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.804776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.804790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.813493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.813508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.020 [2024-07-15 21:32:12.822227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.020 [2024-07-15 21:32:12.822242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.281 [2024-07-15 21:32:12.830840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.281 [2024-07-15 21:32:12.830855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.281 [2024-07-15 21:32:12.839411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.281 [2024-07-15 21:32:12.839426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.281 [2024-07-15 21:32:12.848180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.281 [2024-07-15 21:32:12.848195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.281 [2024-07-15 21:32:12.856749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.281 [2024-07-15 21:32:12.856763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.281 [2024-07-15 21:32:12.865105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.281 [2024-07-15 21:32:12.865118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.281 [2024-07-15 21:32:12.874086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.281 [2024-07-15 21:32:12.874101] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:23.281 [2024-07-15 21:32:12.882648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:23.281 [2024-07-15 21:32:12.882663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(... the same two-line error pair repeats for each further nvmf_subsystem_add_ns attempt against the already-used NSID 1; timestamps run from 21:32:12.891602 through 21:32:13.473050 ...)
00:16:23.805 Latency(us)
00:16:23.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:23.805 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:23.805 Nvme1n1 : 5.01 19393.46 151.51 0.00 0.00 6592.81 2648.75 21408.43
00:16:23.805 ===================================================================================================================
00:16:23.805 Total : 19393.46 151.51 0.00 0.00 6592.81 2648.75 21408.43
00:16:23.805 [2024-07-15 21:32:13.481054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:23.805 [2024-07-15 21:32:13.481065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:23.805 [2024-07-15 21:32:13.493093]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.805 [2024-07-15 21:32:13.493111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.805 [2024-07-15 21:32:13.505120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.805 [2024-07-15 21:32:13.505136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.805 [2024-07-15 21:32:13.517152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.805 [2024-07-15 21:32:13.517163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.805 [2024-07-15 21:32:13.529180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.805 [2024-07-15 21:32:13.529189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.805 [2024-07-15 21:32:13.541208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.805 [2024-07-15 21:32:13.541216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.805 [2024-07-15 21:32:13.549226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.805 [2024-07-15 21:32:13.549232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.805 [2024-07-15 21:32:13.557245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.805 [2024-07-15 21:32:13.557252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.805 [2024-07-15 21:32:13.569279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.805 [2024-07-15 21:32:13.569288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.805 [2024-07-15 21:32:13.577296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.805 [2024-07-15 21:32:13.577302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.805 [2024-07-15 21:32:13.589334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.805 [2024-07-15 21:32:13.589344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.805 [2024-07-15 21:32:13.597349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.805 [2024-07-15 21:32:13.597355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:23.805 [2024-07-15 21:32:13.605369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:23.805 [2024-07-15 21:32:13.605375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:24.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2145193) - No such process 00:16:24.065 21:32:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2145193 00:16:24.065 21:32:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.065 21:32:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.065 21:32:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.065 21:32:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.065 21:32:13 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:24.065 21:32:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.065 21:32:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.065 delay0 00:16:24.066 21:32:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.066 21:32:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:24.066 21:32:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.066 21:32:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.066 21:32:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.066 21:32:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:24.066 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.066 [2024-07-15 21:32:13.698862] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:30.644 Initializing NVMe Controllers 00:16:30.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:30.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:30.644 Initialization complete. Launching workers. 00:16:30.644 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 149 00:16:30.644 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 435, failed to submit 34 00:16:30.644 success 227, unsuccess 208, failed 0 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:30.644 rmmod nvme_tcp 00:16:30.644 rmmod nvme_fabrics 00:16:30.644 rmmod nvme_keyring 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2142989 ']' 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2142989 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2142989 ']' 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2142989 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 2142989 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2142989' 00:16:30.644 killing process with pid 2142989 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2142989 00:16:30.644 21:32:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2142989 00:16:30.644 21:32:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:30.644 21:32:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:30.644 21:32:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:30.644 21:32:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:30.644 21:32:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:30.644 21:32:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.644 21:32:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.644 21:32:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.560 21:32:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:32.560 00:16:32.560 real 0m32.782s 00:16:32.560 user 0m44.696s 00:16:32.560 sys 0m10.210s 00:16:32.560 21:32:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:32.560 21:32:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:32.560 ************************************ 00:16:32.560 END TEST nvmf_zcopy 00:16:32.560 ************************************ 00:16:32.560 21:32:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:32.560 21:32:22 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:32.560 21:32:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:32.560 21:32:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:32.560 21:32:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:32.560 ************************************ 00:16:32.560 START TEST nvmf_nmic 00:16:32.560 ************************************ 00:16:32.560 21:32:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:32.560 * Looking for test storage... 
00:16:32.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:32.560 21:32:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.560 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:32.560 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.560 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.560 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.560 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.560 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.560 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.560 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.560 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.561 21:32:22 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:32.561 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:32.820 21:32:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:32.820 21:32:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:32.820 21:32:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:32.820 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:32.820 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.820 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:32.820 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:32.820 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:32.820 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.820 21:32:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.820 21:32:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.820 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:32.820 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:32.820 21:32:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:32.820 21:32:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:39.401 
21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:39.401 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.401 21:32:29 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:39.401 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:39.401 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:39.401 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
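The nvmf_tcp_init sequence that follows is easier to read without the xtrace prefixes: it moves the first E810 port (cvl_0_0) into a private network namespace to play the target role, leaves the second port (cvl_0_1) in the root namespace as the initiator, and then checks connectivity. A minimal sketch of the same setup, using the interface names and 10.0.0.0/24 addresses seen in this particular run (they will differ on other hosts):

    ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-facing port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open TCP port 4420 in the host firewall
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Because the target lives in the namespace, nvmf_tgt is later launched as 'ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt ...', which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in this log.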
00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:39.401 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.402 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.402 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:39.402 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.402 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.402 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:39.402 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:39.402 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.402 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.662 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.662 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.662 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:39.662 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.662 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.662 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.662 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:39.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:16:39.662 00:16:39.662 --- 10.0.0.2 ping statistics --- 00:16:39.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.662 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:16:39.662 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:39.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:16:39.662 00:16:39.663 --- 10.0.0.1 ping statistics --- 00:16:39.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.663 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2151682 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2151682 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2151682 ']' 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:39.663 21:32:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:39.923 [2024-07-15 21:32:29.496993] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:39.923 [2024-07-15 21:32:29.497059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.923 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.923 [2024-07-15 21:32:29.568344] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.923 [2024-07-15 21:32:29.644433] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.923 [2024-07-15 21:32:29.644471] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:39.923 [2024-07-15 21:32:29.644479] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.923 [2024-07-15 21:32:29.644485] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.923 [2024-07-15 21:32:29.644491] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.923 [2024-07-15 21:32:29.644629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.923 [2024-07-15 21:32:29.644750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.923 [2024-07-15 21:32:29.644911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.923 [2024-07-15 21:32:29.644912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.494 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.494 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:40.494 21:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:40.494 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:40.494 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:40.755 [2024-07-15 21:32:30.327747] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:40.755 Malloc0 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:40.755 [2024-07-15 21:32:30.371017] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:40.755 test case1: single bdev can't be used in multiple subsystems 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:40.755 [2024-07-15 21:32:30.394921] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:40.755 [2024-07-15 21:32:30.394940] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:40.755 [2024-07-15 21:32:30.394948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.755 request: 00:16:40.755 { 00:16:40.755 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:40.755 "namespace": { 00:16:40.755 "bdev_name": "Malloc0", 00:16:40.755 "no_auto_visible": false 00:16:40.755 }, 00:16:40.755 "method": "nvmf_subsystem_add_ns", 00:16:40.755 "req_id": 1 00:16:40.755 } 00:16:40.755 Got JSON-RPC error response 00:16:40.755 response: 00:16:40.755 { 00:16:40.755 "code": -32602, 00:16:40.755 "message": "Invalid parameters" 00:16:40.755 } 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:40.755 Adding namespace failed - expected result. 
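Stripped of the rpc_cmd and xtrace wrapping, case1 is a short series of JSON-RPC calls against the running nvmf_tgt. A rough equivalent using scripts/rpc.py (essentially what rpc_cmd drives here, assuming the default /var/tmp/spdk.sock RPC socket used in this run); the final call is the one the test expects to fail with -32602, because Malloc0 is already claimed exclusive_write by cnode1:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # rejected: bdev already claimed by cnode1

Case2 below then verifies the opposite situation is allowed: a single subsystem exposed on two listeners (4420 and 4421), with the host connecting over both paths.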
00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:40.755 test case2: host connect to nvmf target in multiple paths 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:40.755 [2024-07-15 21:32:30.407055] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.755 21:32:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:42.138 21:32:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:44.084 21:32:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:44.084 21:32:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:44.084 21:32:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:44.084 21:32:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:44.084 21:32:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:46.022 21:32:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:46.022 21:32:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:46.022 21:32:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.023 21:32:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:46.023 21:32:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.023 21:32:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:46.023 21:32:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:46.023 [global] 00:16:46.023 thread=1 00:16:46.023 invalidate=1 00:16:46.023 rw=write 00:16:46.023 time_based=1 00:16:46.023 runtime=1 00:16:46.023 ioengine=libaio 00:16:46.023 direct=1 00:16:46.023 bs=4096 00:16:46.023 iodepth=1 00:16:46.023 norandommap=0 00:16:46.023 numjobs=1 00:16:46.023 00:16:46.023 verify_dump=1 00:16:46.023 verify_backlog=512 00:16:46.023 verify_state_save=0 00:16:46.023 do_verify=1 00:16:46.023 verify=crc32c-intel 00:16:46.023 [job0] 00:16:46.023 filename=/dev/nvme0n1 00:16:46.023 Could not set queue depth (nvme0n1) 00:16:46.287 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:46.287 fio-3.35 00:16:46.287 Starting 1 thread 00:16:47.672 00:16:47.672 job0: (groupid=0, jobs=1): err= 0: pid=2153041: Mon Jul 15 21:32:37 2024 00:16:47.672 read: IOPS=14, BW=57.6KiB/s (59.0kB/s)(60.0KiB/1041msec) 00:16:47.672 slat (nsec): min=6647, max=27034, avg=23394.80, stdev=6168.25 
00:16:47.672 clat (usec): min=1150, max=42959, avg=39379.74, stdev=10581.28 00:16:47.672 lat (usec): min=1160, max=42984, avg=39403.13, stdev=10585.00 00:16:47.672 clat percentiles (usec): 00:16:47.672 | 1.00th=[ 1156], 5.00th=[ 1156], 10.00th=[41681], 20.00th=[41681], 00:16:47.672 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:47.672 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:16:47.672 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:47.672 | 99.99th=[42730] 00:16:47.672 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:16:47.672 slat (nsec): min=9499, max=65682, avg=32668.52, stdev=5780.72 00:16:47.672 clat (usec): min=531, max=1028, avg=838.21, stdev=89.75 00:16:47.672 lat (usec): min=541, max=1062, avg=870.88, stdev=90.73 00:16:47.672 clat percentiles (usec): 00:16:47.672 | 1.00th=[ 586], 5.00th=[ 676], 10.00th=[ 725], 20.00th=[ 758], 00:16:47.672 | 30.00th=[ 791], 40.00th=[ 832], 50.00th=[ 857], 60.00th=[ 873], 00:16:47.672 | 70.00th=[ 889], 80.00th=[ 914], 90.00th=[ 947], 95.00th=[ 963], 00:16:47.672 | 99.00th=[ 988], 99.50th=[ 1004], 99.90th=[ 1029], 99.95th=[ 1029], 00:16:47.672 | 99.99th=[ 1029] 00:16:47.672 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:47.672 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:47.672 lat (usec) : 750=17.27%, 1000=79.32% 00:16:47.672 lat (msec) : 2=0.76%, 50=2.66% 00:16:47.672 cpu : usr=1.44%, sys=1.63%, ctx=527, majf=0, minf=1 00:16:47.672 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:47.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.672 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.672 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:47.672 00:16:47.672 Run status group 0 (all jobs): 00:16:47.672 READ: bw=57.6KiB/s (59.0kB/s), 57.6KiB/s-57.6KiB/s (59.0kB/s-59.0kB/s), io=60.0KiB (61.4kB), run=1041-1041msec 00:16:47.672 WRITE: bw=1967KiB/s (2015kB/s), 1967KiB/s-1967KiB/s (2015kB/s-2015kB/s), io=2048KiB (2097kB), run=1041-1041msec 00:16:47.672 00:16:47.672 Disk stats (read/write): 00:16:47.672 nvme0n1: ios=61/512, merge=0/0, ticks=477/302, in_queue=779, util=93.09% 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:47.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.672 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:47.672 rmmod nvme_tcp 00:16:47.672 rmmod nvme_fabrics 00:16:47.672 rmmod nvme_keyring 00:16:47.673 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:47.673 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:47.673 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:47.673 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2151682 ']' 00:16:47.673 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2151682 00:16:47.673 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2151682 ']' 00:16:47.673 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2151682 00:16:47.673 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:47.673 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.673 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2151682 00:16:47.673 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:47.673 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:47.673 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2151682' 00:16:47.673 killing process with pid 2151682 00:16:47.673 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2151682 00:16:47.673 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2151682 00:16:47.934 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:47.934 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:47.934 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:47.934 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:47.934 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:47.934 21:32:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.934 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.934 21:32:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.849 21:32:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:49.849 00:16:49.849 real 0m17.372s 00:16:49.849 user 0m48.481s 00:16:49.849 sys 0m6.017s 00:16:49.849 21:32:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:49.849 21:32:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:49.849 ************************************ 00:16:49.849 END TEST nvmf_nmic 00:16:49.849 ************************************ 00:16:49.849 21:32:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:49.849 21:32:39 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:49.849 21:32:39 
nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:49.849 21:32:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:49.849 21:32:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:50.110 ************************************ 00:16:50.110 START TEST nvmf_fio_target 00:16:50.110 ************************************ 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:50.110 * Looking for test storage... 00:16:50.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.110 21:32:39 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:50.111 21:32:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.693 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:56.693 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:56.693 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:56.693 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:56.693 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:56.693 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:56.693 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.955 21:32:46 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:56.955 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:56.955 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.955 21:32:46 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:56.955 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:56.955 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:56.955 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:57.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:16:57.216 00:16:57.216 --- 10.0.0.2 ping statistics --- 00:16:57.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.216 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:57.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:57.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:16:57.216 00:16:57.216 --- 10.0.0.1 ping statistics --- 00:16:57.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.216 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2157508 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2157508 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2157508 ']' 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
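Condensed, the nvmftestinit trace above splits the two e810 ports so that target and initiator run on one host but talk over real NICs: the target-facing port is moved into its own network namespace while the initiator port stays in the default one. A minimal sketch of the equivalent commands, using the interface names (cvl_0_0, cvl_0_1) and 10.0.0.0/24 addresses printed by this particular run (they differ per machine):

    ip netns add cvl_0_0_ns_spdk                                        # namespace for the NVMe-oF target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target-facing port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (default namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

The nvmf_tgt application is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the subsystem listener created further below is on 10.0.0.2.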
00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.216 21:32:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.216 [2024-07-15 21:32:46.893763] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:16:57.216 [2024-07-15 21:32:46.893826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.216 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.216 [2024-07-15 21:32:46.964328] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.477 [2024-07-15 21:32:47.040055] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.477 [2024-07-15 21:32:47.040089] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.477 [2024-07-15 21:32:47.040097] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.477 [2024-07-15 21:32:47.040104] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.477 [2024-07-15 21:32:47.040109] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.477 [2024-07-15 21:32:47.040181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.477 [2024-07-15 21:32:47.040298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.477 [2024-07-15 21:32:47.040480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.477 [2024-07-15 21:32:47.040481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.047 21:32:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.047 21:32:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:58.047 21:32:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:58.047 21:32:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:58.047 21:32:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.047 21:32:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.047 21:32:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:58.308 [2024-07-15 21:32:47.854238] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.308 21:32:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.308 21:32:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:58.308 21:32:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.569 21:32:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:58.569 21:32:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.830 21:32:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
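The storage side of fio.sh is driven entirely over JSON-RPC against the nvmf_tgt started above. Condensed, the rpc.py calls traced here and in the lines that follow amount to the sketch below; rpc.py stands for the full .../spdk/scripts/rpc.py path used in the trace, and the bdev names, cnode1 NQN, serial and 10.0.0.2:4420 listener are the values printed by this run:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                    # repeated seven times -> Malloc0 .. Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be

This leaves four namespaces on the connected controller, i.e. the /dev/nvme0n1 .. /dev/nvme0n4 devices that the fio jobs further below read and write.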
00:16:58.830 21:32:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.830 21:32:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:58.830 21:32:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:59.091 21:32:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:59.352 21:32:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:59.352 21:32:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:59.352 21:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:59.352 21:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:59.612 21:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:59.612 21:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:59.872 21:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:59.872 21:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:59.872 21:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:00.133 21:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:00.133 21:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:00.393 21:32:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.393 [2024-07-15 21:32:50.108166] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.393 21:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:00.653 21:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:00.913 21:32:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:02.297 21:32:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:02.297 21:32:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:02.297 21:32:51 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:02.297 21:32:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:02.297 21:32:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:02.297 21:32:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:04.208 21:32:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:04.208 21:32:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:04.208 21:32:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:04.208 21:32:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:04.208 21:32:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:04.208 21:32:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:04.208 21:32:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:04.208 [global] 00:17:04.208 thread=1 00:17:04.208 invalidate=1 00:17:04.208 rw=write 00:17:04.208 time_based=1 00:17:04.208 runtime=1 00:17:04.208 ioengine=libaio 00:17:04.208 direct=1 00:17:04.208 bs=4096 00:17:04.208 iodepth=1 00:17:04.208 norandommap=0 00:17:04.208 numjobs=1 00:17:04.208 00:17:04.208 verify_dump=1 00:17:04.208 verify_backlog=512 00:17:04.208 verify_state_save=0 00:17:04.208 do_verify=1 00:17:04.208 verify=crc32c-intel 00:17:04.535 [job0] 00:17:04.535 filename=/dev/nvme0n1 00:17:04.535 [job1] 00:17:04.535 filename=/dev/nvme0n2 00:17:04.535 [job2] 00:17:04.535 filename=/dev/nvme0n3 00:17:04.535 [job3] 00:17:04.535 filename=/dev/nvme0n4 00:17:04.535 Could not set queue depth (nvme0n1) 00:17:04.535 Could not set queue depth (nvme0n2) 00:17:04.535 Could not set queue depth (nvme0n3) 00:17:04.535 Could not set queue depth (nvme0n4) 00:17:04.805 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:04.805 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:04.805 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:04.805 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:04.805 fio-3.35 00:17:04.805 Starting 4 threads 00:17:06.224 00:17:06.224 job0: (groupid=0, jobs=1): err= 0: pid=2159144: Mon Jul 15 21:32:55 2024 00:17:06.224 read: IOPS=181, BW=724KiB/s (742kB/s)(736KiB/1016msec) 00:17:06.224 slat (nsec): min=4554, max=47209, avg=25005.71, stdev=5186.42 00:17:06.224 clat (usec): min=597, max=42862, avg=3488.64, stdev=9792.59 00:17:06.224 lat (usec): min=623, max=42887, avg=3513.65, stdev=9792.36 00:17:06.224 clat percentiles (usec): 00:17:06.224 | 1.00th=[ 603], 5.00th=[ 783], 10.00th=[ 840], 20.00th=[ 906], 00:17:06.224 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1020], 00:17:06.224 | 70.00th=[ 1057], 80.00th=[ 1156], 90.00th=[ 1418], 95.00th=[41681], 00:17:06.224 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:06.224 | 99.99th=[42730] 00:17:06.224 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:17:06.224 slat (usec): min=9, max=31088, avg=91.25, stdev=1372.61 00:17:06.224 clat 
(usec): min=195, max=873, avg=620.21, stdev=120.72 00:17:06.224 lat (usec): min=205, max=31630, avg=711.46, stdev=1374.73 00:17:06.224 clat percentiles (usec): 00:17:06.224 | 1.00th=[ 233], 5.00th=[ 396], 10.00th=[ 461], 20.00th=[ 529], 00:17:06.224 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 635], 60.00th=[ 668], 00:17:06.224 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 783], 00:17:06.224 | 99.00th=[ 816], 99.50th=[ 832], 99.90th=[ 873], 99.95th=[ 873], 00:17:06.224 | 99.99th=[ 873] 00:17:06.224 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:06.224 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:06.224 lat (usec) : 250=0.86%, 500=9.05%, 750=54.60%, 1000=22.99% 00:17:06.224 lat (msec) : 2=10.78%, 10=0.14%, 50=1.58% 00:17:06.224 cpu : usr=1.28%, sys=1.67%, ctx=699, majf=0, minf=1 00:17:06.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:06.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.224 issued rwts: total=184,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:06.224 job1: (groupid=0, jobs=1): err= 0: pid=2159145: Mon Jul 15 21:32:55 2024 00:17:06.224 read: IOPS=107, BW=432KiB/s (442kB/s)(432KiB/1001msec) 00:17:06.224 slat (nsec): min=16503, max=92657, avg=24768.18, stdev=7013.76 00:17:06.224 clat (usec): min=671, max=42103, avg=5080.90, stdev=11842.72 00:17:06.224 lat (usec): min=695, max=42126, avg=5105.67, stdev=11842.42 00:17:06.224 clat percentiles (usec): 00:17:06.224 | 1.00th=[ 832], 5.00th=[ 1074], 10.00th=[ 1139], 20.00th=[ 1188], 00:17:06.224 | 30.00th=[ 1221], 40.00th=[ 1254], 50.00th=[ 1270], 60.00th=[ 1319], 00:17:06.224 | 70.00th=[ 1369], 80.00th=[ 1434], 90.00th=[ 5276], 95.00th=[42206], 00:17:06.224 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:06.224 | 99.99th=[42206] 00:17:06.224 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:06.224 slat (usec): min=4, max=119, avg=29.64, stdev= 8.04 00:17:06.224 clat (usec): min=444, max=4115, avg=838.87, stdev=201.59 00:17:06.224 lat (usec): min=454, max=4126, avg=868.51, stdev=202.42 00:17:06.224 clat percentiles (usec): 00:17:06.224 | 1.00th=[ 545], 5.00th=[ 603], 10.00th=[ 668], 20.00th=[ 717], 00:17:06.224 | 30.00th=[ 758], 40.00th=[ 791], 50.00th=[ 824], 60.00th=[ 857], 00:17:06.224 | 70.00th=[ 914], 80.00th=[ 963], 90.00th=[ 1012], 95.00th=[ 1057], 00:17:06.224 | 99.00th=[ 1205], 99.50th=[ 1254], 99.90th=[ 4113], 99.95th=[ 4113], 00:17:06.224 | 99.99th=[ 4113] 00:17:06.224 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:06.224 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:06.224 lat (usec) : 500=0.81%, 750=22.58%, 1000=49.03% 00:17:06.224 lat (msec) : 2=25.65%, 10=0.32%, 50=1.61% 00:17:06.224 cpu : usr=0.60%, sys=2.10%, ctx=621, majf=0, minf=1 00:17:06.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:06.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.224 issued rwts: total=108,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:06.224 job2: (groupid=0, jobs=1): err= 0: pid=2159146: Mon 
Jul 15 21:32:55 2024 00:17:06.224 read: IOPS=400, BW=1601KiB/s (1639kB/s)(1604KiB/1002msec) 00:17:06.224 slat (nsec): min=7548, max=66401, avg=26346.00, stdev=3868.73 00:17:06.224 clat (usec): min=983, max=11573, avg=1330.71, stdev=519.91 00:17:06.224 lat (usec): min=1009, max=11599, avg=1357.06, stdev=519.93 00:17:06.224 clat percentiles (usec): 00:17:06.224 | 1.00th=[ 1090], 5.00th=[ 1172], 10.00th=[ 1188], 20.00th=[ 1237], 00:17:06.224 | 30.00th=[ 1270], 40.00th=[ 1303], 50.00th=[ 1319], 60.00th=[ 1336], 00:17:06.224 | 70.00th=[ 1352], 80.00th=[ 1385], 90.00th=[ 1401], 95.00th=[ 1418], 00:17:06.224 | 99.00th=[ 1483], 99.50th=[ 1549], 99.90th=[11600], 99.95th=[11600], 00:17:06.224 | 99.99th=[11600] 00:17:06.224 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:17:06.224 slat (nsec): min=10371, max=55973, avg=33507.60, stdev=6208.79 00:17:06.224 clat (usec): min=230, max=1107, avg=843.51, stdev=123.89 00:17:06.224 lat (usec): min=266, max=1141, avg=877.02, stdev=125.25 00:17:06.224 clat percentiles (usec): 00:17:06.224 | 1.00th=[ 457], 5.00th=[ 619], 10.00th=[ 693], 20.00th=[ 766], 00:17:06.224 | 30.00th=[ 799], 40.00th=[ 840], 50.00th=[ 865], 60.00th=[ 889], 00:17:06.224 | 70.00th=[ 914], 80.00th=[ 938], 90.00th=[ 971], 95.00th=[ 1012], 00:17:06.224 | 99.00th=[ 1057], 99.50th=[ 1074], 99.90th=[ 1106], 99.95th=[ 1106], 00:17:06.224 | 99.99th=[ 1106] 00:17:06.224 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:06.224 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:06.224 lat (usec) : 250=0.11%, 500=1.10%, 750=8.32%, 1000=43.37% 00:17:06.224 lat (msec) : 2=46.99%, 20=0.11% 00:17:06.224 cpu : usr=0.90%, sys=3.40%, ctx=914, majf=0, minf=1 00:17:06.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:06.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.224 issued rwts: total=401,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:06.225 job3: (groupid=0, jobs=1): err= 0: pid=2159148: Mon Jul 15 21:32:55 2024 00:17:06.225 read: IOPS=385, BW=1542KiB/s (1579kB/s)(1544KiB/1001msec) 00:17:06.225 slat (nsec): min=7496, max=56436, avg=26137.90, stdev=4175.42 00:17:06.225 clat (usec): min=577, max=42800, avg=1468.97, stdev=3207.38 00:17:06.225 lat (usec): min=603, max=42826, avg=1495.11, stdev=3207.32 00:17:06.225 clat percentiles (usec): 00:17:06.225 | 1.00th=[ 758], 5.00th=[ 930], 10.00th=[ 1004], 20.00th=[ 1074], 00:17:06.225 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1188], 00:17:06.225 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1287], 95.00th=[ 1352], 00:17:06.225 | 99.00th=[13304], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:06.225 | 99.99th=[42730] 00:17:06.225 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:06.225 slat (usec): min=10, max=29800, avg=89.40, stdev=1315.64 00:17:06.225 clat (usec): min=212, max=1164, avg=722.85, stdev=159.36 00:17:06.225 lat (usec): min=246, max=30685, avg=812.24, stdev=1332.51 00:17:06.225 clat percentiles (usec): 00:17:06.225 | 1.00th=[ 314], 5.00th=[ 441], 10.00th=[ 510], 20.00th=[ 586], 00:17:06.225 | 30.00th=[ 652], 40.00th=[ 693], 50.00th=[ 742], 60.00th=[ 775], 00:17:06.225 | 70.00th=[ 816], 80.00th=[ 857], 90.00th=[ 922], 95.00th=[ 971], 00:17:06.225 | 99.00th=[ 1045], 99.50th=[ 1090], 
99.90th=[ 1172], 99.95th=[ 1172], 00:17:06.225 | 99.99th=[ 1172] 00:17:06.225 bw ( KiB/s): min= 4096, max= 4096, per=50.80%, avg=4096.00, stdev= 0.00, samples=1 00:17:06.225 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:06.225 lat (usec) : 250=0.11%, 500=5.46%, 750=25.28%, 1000=28.73% 00:17:06.225 lat (msec) : 2=39.87%, 20=0.33%, 50=0.22% 00:17:06.225 cpu : usr=1.20%, sys=2.80%, ctx=901, majf=0, minf=1 00:17:06.225 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:06.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.225 issued rwts: total=386,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.225 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:06.225 00:17:06.225 Run status group 0 (all jobs): 00:17:06.225 READ: bw=4248KiB/s (4350kB/s), 432KiB/s-1601KiB/s (442kB/s-1639kB/s), io=4316KiB (4420kB), run=1001-1016msec 00:17:06.225 WRITE: bw=8063KiB/s (8257kB/s), 2016KiB/s-2046KiB/s (2064kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1016msec 00:17:06.225 00:17:06.225 Disk stats (read/write): 00:17:06.225 nvme0n1: ios=229/512, merge=0/0, ticks=1043/307, in_queue=1350, util=85.97% 00:17:06.225 nvme0n2: ios=59/512, merge=0/0, ticks=423/414, in_queue=837, util=85.42% 00:17:06.225 nvme0n3: ios=290/512, merge=0/0, ticks=870/401, in_queue=1271, util=96.08% 00:17:06.225 nvme0n4: ios=255/512, merge=0/0, ticks=1346/351, in_queue=1697, util=98.50% 00:17:06.225 21:32:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:06.225 [global] 00:17:06.225 thread=1 00:17:06.225 invalidate=1 00:17:06.225 rw=randwrite 00:17:06.225 time_based=1 00:17:06.225 runtime=1 00:17:06.225 ioengine=libaio 00:17:06.225 direct=1 00:17:06.225 bs=4096 00:17:06.225 iodepth=1 00:17:06.225 norandommap=0 00:17:06.225 numjobs=1 00:17:06.225 00:17:06.225 verify_dump=1 00:17:06.225 verify_backlog=512 00:17:06.225 verify_state_save=0 00:17:06.225 do_verify=1 00:17:06.225 verify=crc32c-intel 00:17:06.225 [job0] 00:17:06.225 filename=/dev/nvme0n1 00:17:06.225 [job1] 00:17:06.225 filename=/dev/nvme0n2 00:17:06.225 [job2] 00:17:06.225 filename=/dev/nvme0n3 00:17:06.225 [job3] 00:17:06.225 filename=/dev/nvme0n4 00:17:06.225 Could not set queue depth (nvme0n1) 00:17:06.225 Could not set queue depth (nvme0n2) 00:17:06.225 Could not set queue depth (nvme0n3) 00:17:06.225 Could not set queue depth (nvme0n4) 00:17:06.488 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:06.488 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:06.488 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:06.488 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:06.488 fio-3.35 00:17:06.488 Starting 4 threads 00:17:07.891 00:17:07.892 job0: (groupid=0, jobs=1): err= 0: pid=2159673: Mon Jul 15 21:32:57 2024 00:17:07.892 read: IOPS=17, BW=69.4KiB/s (71.1kB/s)(72.0KiB/1037msec) 00:17:07.892 slat (nsec): min=24478, max=25239, avg=24811.94, stdev=207.66 00:17:07.892 clat (usec): min=812, max=43011, avg=35283.27, stdev=15736.79 00:17:07.892 lat (usec): min=837, max=43036, avg=35308.08, stdev=15736.72 00:17:07.892 clat 
percentiles (usec): 00:17:07.892 | 1.00th=[ 816], 5.00th=[ 816], 10.00th=[ 1074], 20.00th=[41681], 00:17:07.892 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:07.892 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:17:07.892 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:07.892 | 99.99th=[43254] 00:17:07.892 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:17:07.892 slat (nsec): min=10209, max=65909, avg=31647.37, stdev=5572.53 00:17:07.892 clat (usec): min=256, max=1323, avg=735.70, stdev=184.16 00:17:07.892 lat (usec): min=288, max=1354, avg=767.34, stdev=184.91 00:17:07.892 clat percentiles (usec): 00:17:07.892 | 1.00th=[ 310], 5.00th=[ 416], 10.00th=[ 478], 20.00th=[ 553], 00:17:07.892 | 30.00th=[ 635], 40.00th=[ 725], 50.00th=[ 766], 60.00th=[ 807], 00:17:07.892 | 70.00th=[ 857], 80.00th=[ 889], 90.00th=[ 922], 95.00th=[ 971], 00:17:07.892 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1319], 99.95th=[ 1319], 00:17:07.892 | 99.99th=[ 1319] 00:17:07.892 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:17:07.892 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:07.892 lat (usec) : 500=12.26%, 750=31.13%, 1000=49.43% 00:17:07.892 lat (msec) : 2=4.34%, 50=2.83% 00:17:07.892 cpu : usr=0.77%, sys=1.64%, ctx=533, majf=0, minf=1 00:17:07.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.892 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.892 job1: (groupid=0, jobs=1): err= 0: pid=2159674: Mon Jul 15 21:32:57 2024 00:17:07.892 read: IOPS=191, BW=767KiB/s (786kB/s)(768KiB/1001msec) 00:17:07.892 slat (nsec): min=7828, max=42211, avg=24271.46, stdev=2565.23 00:17:07.892 clat (usec): min=969, max=42074, avg=2944.38, stdev=8133.65 00:17:07.892 lat (usec): min=993, max=42099, avg=2968.65, stdev=8133.69 00:17:07.892 clat percentiles (usec): 00:17:07.892 | 1.00th=[ 971], 5.00th=[ 1057], 10.00th=[ 1090], 20.00th=[ 1106], 00:17:07.892 | 30.00th=[ 1156], 40.00th=[ 1188], 50.00th=[ 1237], 60.00th=[ 1287], 00:17:07.892 | 70.00th=[ 1336], 80.00th=[ 1369], 90.00th=[ 1450], 95.00th=[ 1614], 00:17:07.892 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:07.892 | 99.99th=[42206] 00:17:07.892 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:07.892 slat (nsec): min=9232, max=51962, avg=28621.71, stdev=6996.81 00:17:07.892 clat (usec): min=402, max=1173, avg=802.10, stdev=113.94 00:17:07.892 lat (usec): min=412, max=1204, avg=830.72, stdev=116.48 00:17:07.892 clat percentiles (usec): 00:17:07.892 | 1.00th=[ 502], 5.00th=[ 594], 10.00th=[ 660], 20.00th=[ 717], 00:17:07.892 | 30.00th=[ 758], 40.00th=[ 783], 50.00th=[ 807], 60.00th=[ 824], 00:17:07.892 | 70.00th=[ 865], 80.00th=[ 898], 90.00th=[ 938], 95.00th=[ 971], 00:17:07.892 | 99.00th=[ 1037], 99.50th=[ 1139], 99.90th=[ 1172], 99.95th=[ 1172], 00:17:07.892 | 99.99th=[ 1172] 00:17:07.892 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:17:07.892 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:07.892 lat (usec) : 500=0.71%, 750=20.03%, 1000=50.00% 00:17:07.892 lat (msec) : 2=27.98%, 
10=0.14%, 50=1.14% 00:17:07.892 cpu : usr=1.20%, sys=1.80%, ctx=704, majf=0, minf=1 00:17:07.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.892 issued rwts: total=192,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.892 job2: (groupid=0, jobs=1): err= 0: pid=2159675: Mon Jul 15 21:32:57 2024 00:17:07.892 read: IOPS=376, BW=1506KiB/s (1543kB/s)(1508KiB/1001msec) 00:17:07.892 slat (nsec): min=7838, max=63789, avg=25509.10, stdev=5014.08 00:17:07.892 clat (usec): min=965, max=42163, avg=1497.47, stdev=2965.95 00:17:07.892 lat (usec): min=990, max=42188, avg=1522.98, stdev=2965.92 00:17:07.892 clat percentiles (usec): 00:17:07.892 | 1.00th=[ 1045], 5.00th=[ 1139], 10.00th=[ 1188], 20.00th=[ 1221], 00:17:07.892 | 30.00th=[ 1237], 40.00th=[ 1254], 50.00th=[ 1270], 60.00th=[ 1287], 00:17:07.892 | 70.00th=[ 1319], 80.00th=[ 1336], 90.00th=[ 1385], 95.00th=[ 1450], 00:17:07.892 | 99.00th=[ 1598], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:17:07.892 | 99.99th=[42206] 00:17:07.892 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:07.892 slat (nsec): min=9513, max=48747, avg=29080.04, stdev=6516.52 00:17:07.892 clat (usec): min=440, max=1244, avg=790.06, stdev=122.08 00:17:07.892 lat (usec): min=450, max=1275, avg=819.14, stdev=123.51 00:17:07.892 clat percentiles (usec): 00:17:07.892 | 1.00th=[ 490], 5.00th=[ 570], 10.00th=[ 635], 20.00th=[ 685], 00:17:07.892 | 30.00th=[ 734], 40.00th=[ 766], 50.00th=[ 799], 60.00th=[ 824], 00:17:07.892 | 70.00th=[ 857], 80.00th=[ 889], 90.00th=[ 938], 95.00th=[ 971], 00:17:07.892 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1237], 99.95th=[ 1237], 00:17:07.892 | 99.99th=[ 1237] 00:17:07.892 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:17:07.892 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:07.892 lat (usec) : 500=0.67%, 750=18.79%, 1000=36.78% 00:17:07.892 lat (msec) : 2=43.53%, 50=0.22% 00:17:07.892 cpu : usr=0.70%, sys=3.20%, ctx=889, majf=0, minf=1 00:17:07.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.892 issued rwts: total=377,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.892 job3: (groupid=0, jobs=1): err= 0: pid=2159676: Mon Jul 15 21:32:57 2024 00:17:07.892 read: IOPS=83, BW=336KiB/s (344kB/s)(336KiB/1001msec) 00:17:07.892 slat (nsec): min=8600, max=60442, avg=25405.00, stdev=4281.23 00:17:07.892 clat (usec): min=1139, max=42037, avg=6752.75, stdev=13769.27 00:17:07.892 lat (usec): min=1164, max=42062, avg=6778.16, stdev=13769.19 00:17:07.892 clat percentiles (usec): 00:17:07.892 | 1.00th=[ 1139], 5.00th=[ 1270], 10.00th=[ 1303], 20.00th=[ 1319], 00:17:07.892 | 30.00th=[ 1336], 40.00th=[ 1352], 50.00th=[ 1352], 60.00th=[ 1385], 00:17:07.892 | 70.00th=[ 1401], 80.00th=[ 1450], 90.00th=[41681], 95.00th=[42206], 00:17:07.892 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:07.892 | 99.99th=[42206] 00:17:07.892 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 
zone resets 00:17:07.892 slat (nsec): min=9813, max=64251, avg=30042.30, stdev=7806.34 00:17:07.892 clat (usec): min=414, max=1061, avg=795.16, stdev=109.47 00:17:07.892 lat (usec): min=425, max=1092, avg=825.20, stdev=112.14 00:17:07.892 clat percentiles (usec): 00:17:07.892 | 1.00th=[ 478], 5.00th=[ 603], 10.00th=[ 660], 20.00th=[ 701], 00:17:07.892 | 30.00th=[ 750], 40.00th=[ 783], 50.00th=[ 807], 60.00th=[ 832], 00:17:07.892 | 70.00th=[ 857], 80.00th=[ 881], 90.00th=[ 922], 95.00th=[ 955], 00:17:07.892 | 99.00th=[ 1012], 99.50th=[ 1037], 99.90th=[ 1057], 99.95th=[ 1057], 00:17:07.892 | 99.99th=[ 1057] 00:17:07.892 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:17:07.892 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:07.892 lat (usec) : 500=1.17%, 750=25.50%, 1000=57.72% 00:17:07.892 lat (msec) : 2=13.59%, 10=0.17%, 50=1.85% 00:17:07.892 cpu : usr=1.10%, sys=1.50%, ctx=597, majf=0, minf=1 00:17:07.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.892 issued rwts: total=84,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.892 00:17:07.892 Run status group 0 (all jobs): 00:17:07.892 READ: bw=2588KiB/s (2650kB/s), 69.4KiB/s-1506KiB/s (71.1kB/s-1543kB/s), io=2684KiB (2748kB), run=1001-1037msec 00:17:07.892 WRITE: bw=7900KiB/s (8089kB/s), 1975KiB/s-2046KiB/s (2022kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1037msec 00:17:07.892 00:17:07.892 Disk stats (read/write): 00:17:07.892 nvme0n1: ios=44/512, merge=0/0, ticks=1540/351, in_queue=1891, util=98.20% 00:17:07.892 nvme0n2: ios=144/512, merge=0/0, ticks=502/381, in_queue=883, util=91.85% 00:17:07.892 nvme0n3: ios=278/512, merge=0/0, ticks=628/391, in_queue=1019, util=91.06% 00:17:07.892 nvme0n4: ios=43/512, merge=0/0, ticks=1032/374, in_queue=1406, util=98.09% 00:17:07.892 21:32:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:07.892 [global] 00:17:07.892 thread=1 00:17:07.892 invalidate=1 00:17:07.892 rw=write 00:17:07.892 time_based=1 00:17:07.892 runtime=1 00:17:07.892 ioengine=libaio 00:17:07.892 direct=1 00:17:07.892 bs=4096 00:17:07.892 iodepth=128 00:17:07.892 norandommap=0 00:17:07.892 numjobs=1 00:17:07.892 00:17:07.892 verify_dump=1 00:17:07.892 verify_backlog=512 00:17:07.892 verify_state_save=0 00:17:07.892 do_verify=1 00:17:07.892 verify=crc32c-intel 00:17:07.892 [job0] 00:17:07.892 filename=/dev/nvme0n1 00:17:07.892 [job1] 00:17:07.892 filename=/dev/nvme0n2 00:17:07.892 [job2] 00:17:07.892 filename=/dev/nvme0n3 00:17:07.892 [job3] 00:17:07.892 filename=/dev/nvme0n4 00:17:07.892 Could not set queue depth (nvme0n1) 00:17:07.892 Could not set queue depth (nvme0n2) 00:17:07.892 Could not set queue depth (nvme0n3) 00:17:07.892 Could not set queue depth (nvme0n4) 00:17:08.154 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:08.154 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:08.154 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:08.154 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:08.154 fio-3.35 00:17:08.154 Starting 4 threads 00:17:09.566 00:17:09.566 job0: (groupid=0, jobs=1): err= 0: pid=2160200: Mon Jul 15 21:32:59 2024 00:17:09.566 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:17:09.566 slat (nsec): min=900, max=14128k, avg=85651.58, stdev=670695.13 00:17:09.566 clat (usec): min=1647, max=44645, avg=12363.88, stdev=6895.18 00:17:09.566 lat (usec): min=1656, max=44652, avg=12449.54, stdev=6925.33 00:17:09.566 clat percentiles (usec): 00:17:09.566 | 1.00th=[ 3556], 5.00th=[ 5538], 10.00th=[ 6128], 20.00th=[ 7570], 00:17:09.566 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[11076], 00:17:09.566 | 70.00th=[13304], 80.00th=[15664], 90.00th=[22152], 95.00th=[24511], 00:17:09.566 | 99.00th=[41681], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:17:09.566 | 99.99th=[44827] 00:17:09.566 write: IOPS=5979, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1007msec); 0 zone resets 00:17:09.566 slat (nsec): min=1573, max=11429k, avg=62893.95, stdev=383979.82 00:17:09.567 clat (usec): min=1280, max=44633, avg=9626.79, stdev=5081.55 00:17:09.567 lat (usec): min=1290, max=44640, avg=9689.68, stdev=5104.51 00:17:09.567 clat percentiles (usec): 00:17:09.567 | 1.00th=[ 1811], 5.00th=[ 2966], 10.00th=[ 3818], 20.00th=[ 5473], 00:17:09.567 | 30.00th=[ 6456], 40.00th=[ 8225], 50.00th=[ 9372], 60.00th=[10028], 00:17:09.567 | 70.00th=[10945], 80.00th=[12911], 90.00th=[15533], 95.00th=[20055], 00:17:09.567 | 99.00th=[27395], 99.50th=[31327], 99.90th=[34341], 99.95th=[34341], 00:17:09.567 | 99.99th=[44827] 00:17:09.567 bw ( KiB/s): min=23072, max=24576, per=26.01%, avg=23824.00, stdev=1063.49, samples=2 00:17:09.567 iops : min= 5768, max= 6144, avg=5956.00, stdev=265.87, samples=2 00:17:09.567 lat (msec) : 2=0.74%, 4=5.61%, 10=47.03%, 20=38.35%, 50=8.27% 00:17:09.567 cpu : usr=4.17%, sys=6.26%, ctx=624, majf=0, minf=1 00:17:09.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:09.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:09.567 issued rwts: total=5632,6021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:09.567 job1: (groupid=0, jobs=1): err= 0: pid=2160201: Mon Jul 15 21:32:59 2024 00:17:09.567 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:17:09.567 slat (nsec): min=884, max=14143k, avg=98623.11, stdev=680006.66 00:17:09.567 clat (usec): min=2377, max=54691, avg=13908.61, stdev=8564.16 00:17:09.567 lat (usec): min=2381, max=56075, avg=14007.23, stdev=8629.18 00:17:09.567 clat percentiles (usec): 00:17:09.567 | 1.00th=[ 5604], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[ 9110], 00:17:09.567 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[11207], 00:17:09.567 | 70.00th=[12911], 80.00th=[15401], 90.00th=[28967], 95.00th=[35390], 00:17:09.567 | 99.00th=[41157], 99.50th=[42206], 99.90th=[54264], 99.95th=[54264], 00:17:09.567 | 99.99th=[54789] 00:17:09.567 write: IOPS=4931, BW=19.3MiB/s (20.2MB/s)(19.4MiB/1007msec); 0 zone resets 00:17:09.567 slat (nsec): min=1537, max=9921.9k, avg=99206.73, stdev=574942.06 00:17:09.567 clat (usec): min=1297, max=73201, avg=12772.05, stdev=10857.87 00:17:09.567 lat (usec): min=1308, max=73206, avg=12871.26, stdev=10929.48 00:17:09.567 clat percentiles (usec): 00:17:09.567 | 1.00th=[ 4015], 5.00th=[ 5800], 10.00th=[ 6980], 
20.00th=[ 8225], 00:17:09.567 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10290], 00:17:09.567 | 70.00th=[11207], 80.00th=[13698], 90.00th=[17957], 95.00th=[30802], 00:17:09.567 | 99.00th=[66323], 99.50th=[67634], 99.90th=[72877], 99.95th=[72877], 00:17:09.567 | 99.99th=[72877] 00:17:09.567 bw ( KiB/s): min=17600, max=21104, per=21.13%, avg=19352.00, stdev=2477.70, samples=2 00:17:09.567 iops : min= 4400, max= 5276, avg=4838.00, stdev=619.43, samples=2 00:17:09.567 lat (msec) : 2=0.21%, 4=0.74%, 10=47.94%, 20=39.59%, 50=9.59% 00:17:09.567 lat (msec) : 100=1.93% 00:17:09.567 cpu : usr=3.78%, sys=3.68%, ctx=452, majf=0, minf=1 00:17:09.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:09.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:09.567 issued rwts: total=4608,4966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:09.567 job2: (groupid=0, jobs=1): err= 0: pid=2160202: Mon Jul 15 21:32:59 2024 00:17:09.567 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:17:09.567 slat (nsec): min=886, max=14620k, avg=86563.85, stdev=682430.53 00:17:09.567 clat (usec): min=2969, max=55769, avg=11701.64, stdev=5762.05 00:17:09.567 lat (usec): min=3005, max=55774, avg=11788.20, stdev=5796.22 00:17:09.567 clat percentiles (usec): 00:17:09.567 | 1.00th=[ 4817], 5.00th=[ 5932], 10.00th=[ 6849], 20.00th=[ 8356], 00:17:09.567 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[11076], 00:17:09.567 | 70.00th=[12387], 80.00th=[13960], 90.00th=[17695], 95.00th=[25560], 00:17:09.567 | 99.00th=[30016], 99.50th=[47449], 99.90th=[51643], 99.95th=[51643], 00:17:09.567 | 99.99th=[55837] 00:17:09.567 write: IOPS=5883, BW=23.0MiB/s (24.1MB/s)(23.1MiB/1007msec); 0 zone resets 00:17:09.567 slat (nsec): min=1525, max=18548k, avg=78375.65, stdev=540603.80 00:17:09.567 clat (usec): min=998, max=49898, avg=10444.42, stdev=5811.74 00:17:09.567 lat (usec): min=1006, max=49908, avg=10522.80, stdev=5838.40 00:17:09.567 clat percentiles (usec): 00:17:09.567 | 1.00th=[ 2245], 5.00th=[ 5145], 10.00th=[ 6587], 20.00th=[ 7767], 00:17:09.567 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10159], 00:17:09.567 | 70.00th=[10814], 80.00th=[11469], 90.00th=[13435], 95.00th=[16581], 00:17:09.567 | 99.00th=[46400], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:17:09.567 | 99.99th=[50070] 00:17:09.567 bw ( KiB/s): min=21800, max=24576, per=25.32%, avg=23188.00, stdev=1962.93, samples=2 00:17:09.567 iops : min= 5450, max= 6144, avg=5797.00, stdev=490.73, samples=2 00:17:09.567 lat (usec) : 1000=0.02% 00:17:09.567 lat (msec) : 2=0.33%, 4=1.50%, 10=52.20%, 20=40.64%, 50=5.17% 00:17:09.567 lat (msec) : 100=0.14% 00:17:09.567 cpu : usr=3.48%, sys=5.57%, ctx=462, majf=0, minf=1 00:17:09.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:09.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:09.567 issued rwts: total=5632,5925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:09.567 job3: (groupid=0, jobs=1): err= 0: pid=2160203: Mon Jul 15 21:32:59 2024 00:17:09.567 read: IOPS=6036, BW=23.6MiB/s (24.7MB/s)(23.7MiB/1003msec) 00:17:09.567 slat (nsec): min=949, 
max=10518k, avg=71801.92, stdev=594217.41 00:17:09.567 clat (usec): min=1014, max=22553, avg=10317.16, stdev=3527.96 00:17:09.567 lat (usec): min=1568, max=22578, avg=10388.97, stdev=3569.71 00:17:09.567 clat percentiles (usec): 00:17:09.567 | 1.00th=[ 2933], 5.00th=[ 4047], 10.00th=[ 5276], 20.00th=[ 7701], 00:17:09.567 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[10945], 00:17:09.567 | 70.00th=[11994], 80.00th=[13173], 90.00th=[14877], 95.00th=[16581], 00:17:09.567 | 99.00th=[18220], 99.50th=[18744], 99.90th=[21627], 99.95th=[21890], 00:17:09.567 | 99.99th=[22676] 00:17:09.567 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:17:09.567 slat (nsec): min=1624, max=20085k, avg=68505.69, stdev=544587.29 00:17:09.567 clat (usec): min=449, max=46417, avg=10070.29, stdev=6218.29 00:17:09.567 lat (usec): min=483, max=46419, avg=10138.79, stdev=6242.59 00:17:09.567 clat percentiles (usec): 00:17:09.567 | 1.00th=[ 1369], 5.00th=[ 2933], 10.00th=[ 4555], 20.00th=[ 5932], 00:17:09.567 | 30.00th=[ 6718], 40.00th=[ 7635], 50.00th=[ 8848], 60.00th=[ 9765], 00:17:09.567 | 70.00th=[10945], 80.00th=[13173], 90.00th=[17695], 95.00th=[21365], 00:17:09.567 | 99.00th=[37487], 99.50th=[44303], 99.90th=[45876], 99.95th=[45876], 00:17:09.567 | 99.99th=[46400] 00:17:09.567 bw ( KiB/s): min=24576, max=24576, per=26.83%, avg=24576.00, stdev= 0.00, samples=2 00:17:09.567 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:17:09.567 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.12% 00:17:09.567 lat (msec) : 2=1.24%, 4=5.16%, 10=47.79%, 20=41.86%, 50=3.80% 00:17:09.567 cpu : usr=5.39%, sys=6.29%, ctx=403, majf=0, minf=1 00:17:09.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:09.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:09.567 issued rwts: total=6055,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:09.567 00:17:09.567 Run status group 0 (all jobs): 00:17:09.567 READ: bw=85.1MiB/s (89.2MB/s), 17.9MiB/s-23.6MiB/s (18.7MB/s-24.7MB/s), io=85.7MiB (89.8MB), run=1003-1007msec 00:17:09.567 WRITE: bw=89.4MiB/s (93.8MB/s), 19.3MiB/s-23.9MiB/s (20.2MB/s-25.1MB/s), io=90.1MiB (94.4MB), run=1003-1007msec 00:17:09.567 00:17:09.567 Disk stats (read/write): 00:17:09.567 nvme0n1: ios=4646/5069, merge=0/0, ticks=53187/44938, in_queue=98125, util=96.69% 00:17:09.567 nvme0n2: ios=4147/4540, merge=0/0, ticks=27674/22200, in_queue=49874, util=89.91% 00:17:09.567 nvme0n3: ios=4664/4836, merge=0/0, ticks=30754/25479, in_queue=56233, util=90.62% 00:17:09.567 nvme0n4: ios=4715/5120, merge=0/0, ticks=50104/49444, in_queue=99548, util=96.48% 00:17:09.567 21:32:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:09.567 [global] 00:17:09.568 thread=1 00:17:09.568 invalidate=1 00:17:09.568 rw=randwrite 00:17:09.568 time_based=1 00:17:09.568 runtime=1 00:17:09.568 ioengine=libaio 00:17:09.568 direct=1 00:17:09.568 bs=4096 00:17:09.568 iodepth=128 00:17:09.568 norandommap=0 00:17:09.568 numjobs=1 00:17:09.568 00:17:09.568 verify_dump=1 00:17:09.568 verify_backlog=512 00:17:09.568 verify_state_save=0 00:17:09.568 do_verify=1 00:17:09.568 verify=crc32c-intel 00:17:09.568 [job0] 00:17:09.568 filename=/dev/nvme0n1 00:17:09.568 [job1] 00:17:09.568 
filename=/dev/nvme0n2 00:17:09.568 [job2] 00:17:09.568 filename=/dev/nvme0n3 00:17:09.568 [job3] 00:17:09.568 filename=/dev/nvme0n4 00:17:09.568 Could not set queue depth (nvme0n1) 00:17:09.568 Could not set queue depth (nvme0n2) 00:17:09.568 Could not set queue depth (nvme0n3) 00:17:09.568 Could not set queue depth (nvme0n4) 00:17:09.832 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.832 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.832 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.832 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:09.833 fio-3.35 00:17:09.833 Starting 4 threads 00:17:11.259 00:17:11.259 job0: (groupid=0, jobs=1): err= 0: pid=2160718: Mon Jul 15 21:33:00 2024 00:17:11.259 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:17:11.259 slat (nsec): min=918, max=17211k, avg=104907.74, stdev=802051.89 00:17:11.259 clat (usec): min=1990, max=59285, avg=14318.46, stdev=9365.76 00:17:11.259 lat (usec): min=2015, max=59289, avg=14423.37, stdev=9424.11 00:17:11.259 clat percentiles (usec): 00:17:11.259 | 1.00th=[ 2212], 5.00th=[ 4817], 10.00th=[ 7111], 20.00th=[ 8979], 00:17:11.259 | 30.00th=[ 9896], 40.00th=[10945], 50.00th=[11469], 60.00th=[13042], 00:17:11.259 | 70.00th=[14746], 80.00th=[17695], 90.00th=[25822], 95.00th=[28181], 00:17:11.259 | 99.00th=[57934], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:17:11.259 | 99.99th=[59507] 00:17:11.259 write: IOPS=4649, BW=18.2MiB/s (19.0MB/s)(18.3MiB/1005msec); 0 zone resets 00:17:11.259 slat (nsec): min=1520, max=13190k, avg=94678.04, stdev=686092.07 00:17:11.259 clat (usec): min=1044, max=40992, avg=13116.65, stdev=8078.32 00:17:11.259 lat (usec): min=1051, max=41000, avg=13211.33, stdev=8128.18 00:17:11.259 clat percentiles (usec): 00:17:11.259 | 1.00th=[ 1778], 5.00th=[ 3851], 10.00th=[ 5735], 20.00th=[ 7308], 00:17:11.259 | 30.00th=[ 8455], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[11994], 00:17:11.259 | 70.00th=[14091], 80.00th=[18482], 90.00th=[27657], 95.00th=[31065], 00:17:11.259 | 99.00th=[36963], 99.50th=[36963], 99.90th=[41157], 99.95th=[41157], 00:17:11.259 | 99.99th=[41157] 00:17:11.259 bw ( KiB/s): min=12288, max=24576, per=20.54%, avg=18432.00, stdev=8688.93, samples=2 00:17:11.259 iops : min= 3072, max= 6144, avg=4608.00, stdev=2172.23, samples=2 00:17:11.259 lat (msec) : 2=0.68%, 4=3.63%, 10=34.96%, 20=45.25%, 50=14.79% 00:17:11.259 lat (msec) : 100=0.68% 00:17:11.259 cpu : usr=3.29%, sys=5.38%, ctx=361, majf=0, minf=1 00:17:11.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:11.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:11.259 issued rwts: total=4608,4673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:11.259 job1: (groupid=0, jobs=1): err= 0: pid=2160719: Mon Jul 15 21:33:00 2024 00:17:11.259 read: IOPS=8352, BW=32.6MiB/s (34.2MB/s)(32.8MiB/1005msec) 00:17:11.259 slat (nsec): min=901, max=11947k, avg=58762.41, stdev=422343.42 00:17:11.259 clat (usec): min=3332, max=27964, avg=7651.46, stdev=3247.68 00:17:11.259 lat (usec): min=3334, max=27967, avg=7710.23, stdev=3267.21 00:17:11.259 clat 
percentiles (usec): 00:17:11.259 | 1.00th=[ 4015], 5.00th=[ 4817], 10.00th=[ 5276], 20.00th=[ 5669], 00:17:11.259 | 30.00th=[ 5932], 40.00th=[ 6325], 50.00th=[ 6783], 60.00th=[ 7439], 00:17:11.259 | 70.00th=[ 8094], 80.00th=[ 8717], 90.00th=[10159], 95.00th=[14877], 00:17:11.259 | 99.00th=[25297], 99.50th=[27657], 99.90th=[27657], 99.95th=[27657], 00:17:11.259 | 99.99th=[27919] 00:17:11.259 write: IOPS=8660, BW=33.8MiB/s (35.5MB/s)(34.0MiB/1005msec); 0 zone resets 00:17:11.259 slat (nsec): min=1556, max=15391k, avg=54010.82, stdev=422355.96 00:17:11.259 clat (usec): min=1249, max=36846, avg=7258.21, stdev=4713.75 00:17:11.259 lat (usec): min=1258, max=36849, avg=7312.22, stdev=4738.55 00:17:11.259 clat percentiles (usec): 00:17:11.259 | 1.00th=[ 2343], 5.00th=[ 3064], 10.00th=[ 3752], 20.00th=[ 4752], 00:17:11.259 | 30.00th=[ 5276], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 6128], 00:17:11.259 | 70.00th=[ 7111], 80.00th=[ 8717], 90.00th=[11863], 95.00th=[15533], 00:17:11.259 | 99.00th=[30802], 99.50th=[33162], 99.90th=[36963], 99.95th=[36963], 00:17:11.259 | 99.99th=[36963] 00:17:11.259 bw ( KiB/s): min=24920, max=44712, per=38.79%, avg=34816.00, stdev=13995.06, samples=2 00:17:11.259 iops : min= 6230, max=11178, avg=8704.00, stdev=3498.76, samples=2 00:17:11.259 lat (msec) : 2=0.13%, 4=6.45%, 10=80.28%, 20=10.81%, 50=2.33% 00:17:11.259 cpu : usr=3.49%, sys=6.27%, ctx=776, majf=0, minf=1 00:17:11.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:11.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:11.260 issued rwts: total=8394,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:11.260 job2: (groupid=0, jobs=1): err= 0: pid=2160720: Mon Jul 15 21:33:00 2024 00:17:11.260 read: IOPS=4706, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1006msec) 00:17:11.260 slat (nsec): min=939, max=25934k, avg=118654.20, stdev=836836.79 00:17:11.260 clat (usec): min=1809, max=50850, avg=14794.37, stdev=7724.99 00:17:11.260 lat (usec): min=5420, max=50862, avg=14913.03, stdev=7776.99 00:17:11.260 clat percentiles (usec): 00:17:11.260 | 1.00th=[ 6915], 5.00th=[ 7373], 10.00th=[ 7635], 20.00th=[ 8356], 00:17:11.260 | 30.00th=[10028], 40.00th=[11076], 50.00th=[12387], 60.00th=[14091], 00:17:11.260 | 70.00th=[16909], 80.00th=[19530], 90.00th=[25035], 95.00th=[28967], 00:17:11.260 | 99.00th=[43254], 99.50th=[47449], 99.90th=[47449], 99.95th=[47973], 00:17:11.260 | 99.99th=[50594] 00:17:11.260 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:17:11.260 slat (nsec): min=1552, max=6419.3k, avg=80391.84, stdev=450279.19 00:17:11.260 clat (usec): min=1209, max=28854, avg=11237.50, stdev=4788.65 00:17:11.260 lat (usec): min=1221, max=28857, avg=11317.90, stdev=4805.92 00:17:11.260 clat percentiles (usec): 00:17:11.260 | 1.00th=[ 3785], 5.00th=[ 5473], 10.00th=[ 6325], 20.00th=[ 7177], 00:17:11.260 | 30.00th=[ 8160], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[11207], 00:17:11.260 | 70.00th=[12649], 80.00th=[14877], 90.00th=[18744], 95.00th=[21103], 00:17:11.260 | 99.00th=[25822], 99.50th=[27395], 99.90th=[27395], 99.95th=[27395], 00:17:11.260 | 99.99th=[28967] 00:17:11.260 bw ( KiB/s): min=19256, max=21696, per=22.82%, avg=20476.00, stdev=1725.34, samples=2 00:17:11.260 iops : min= 4814, max= 5424, avg=5119.00, stdev=431.34, samples=2 00:17:11.260 lat (msec) : 2=0.03%, 4=0.55%, 10=38.76%, 
20=47.83%, 50=12.81% 00:17:11.260 lat (msec) : 100=0.02% 00:17:11.260 cpu : usr=3.48%, sys=4.68%, ctx=460, majf=0, minf=1 00:17:11.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:11.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:11.260 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:11.260 job3: (groupid=0, jobs=1): err= 0: pid=2160721: Mon Jul 15 21:33:00 2024 00:17:11.260 read: IOPS=3750, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1007msec) 00:17:11.260 slat (nsec): min=906, max=44780k, avg=165128.19, stdev=1305416.76 00:17:11.260 clat (usec): min=1268, max=66407, avg=20794.06, stdev=15000.77 00:17:11.260 lat (usec): min=4470, max=66411, avg=20959.18, stdev=15062.53 00:17:11.260 clat percentiles (usec): 00:17:11.260 | 1.00th=[ 5604], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[ 9503], 00:17:11.260 | 30.00th=[10421], 40.00th=[12125], 50.00th=[13698], 60.00th=[17957], 00:17:11.260 | 70.00th=[22414], 80.00th=[32113], 90.00th=[45876], 95.00th=[52691], 00:17:11.260 | 99.00th=[61604], 99.50th=[66323], 99.90th=[66323], 99.95th=[66323], 00:17:11.260 | 99.99th=[66323] 00:17:11.260 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:17:11.260 slat (nsec): min=1540, max=8473.0k, avg=85140.47, stdev=485102.73 00:17:11.260 clat (usec): min=1238, max=56804, avg=11994.02, stdev=5411.76 00:17:11.260 lat (usec): min=1247, max=56817, avg=12079.16, stdev=5422.89 00:17:11.260 clat percentiles (usec): 00:17:11.260 | 1.00th=[ 2147], 5.00th=[ 5538], 10.00th=[ 6980], 20.00th=[ 8586], 00:17:11.260 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[11076], 60.00th=[12387], 00:17:11.260 | 70.00th=[13042], 80.00th=[14091], 90.00th=[17171], 95.00th=[22676], 00:17:11.260 | 99.00th=[33817], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:17:11.260 | 99.99th=[56886] 00:17:11.260 bw ( KiB/s): min=12288, max=20480, per=18.26%, avg=16384.00, stdev=5792.62, samples=2 00:17:11.260 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:17:11.260 lat (msec) : 2=0.43%, 4=0.89%, 10=30.41%, 20=47.48%, 50=16.28% 00:17:11.260 lat (msec) : 100=4.51% 00:17:11.260 cpu : usr=2.39%, sys=3.88%, ctx=367, majf=0, minf=1 00:17:11.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:11.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:11.260 issued rwts: total=3777,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:11.260 00:17:11.260 Run status group 0 (all jobs): 00:17:11.260 READ: bw=83.5MiB/s (87.5MB/s), 14.7MiB/s-32.6MiB/s (15.4MB/s-34.2MB/s), io=84.0MiB (88.1MB), run=1005-1007msec 00:17:11.260 WRITE: bw=87.6MiB/s (91.9MB/s), 15.9MiB/s-33.8MiB/s (16.7MB/s-35.5MB/s), io=88.3MiB (92.5MB), run=1005-1007msec 00:17:11.260 00:17:11.260 Disk stats (read/write): 00:17:11.260 nvme0n1: ios=4144/4594, merge=0/0, ticks=30816/34482, in_queue=65298, util=84.47% 00:17:11.260 nvme0n2: ios=6699/6748, merge=0/0, ticks=39665/37253, in_queue=76918, util=85.73% 00:17:11.260 nvme0n3: ios=3855/4096, merge=0/0, ticks=31892/28334, in_queue=60226, util=94.42% 00:17:11.260 nvme0n4: ios=3384/3584, merge=0/0, ticks=23265/15759, in_queue=39024, util=93.28% 00:17:11.260 21:33:00 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:11.260 21:33:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2161092 00:17:11.260 21:33:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:11.260 21:33:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:11.260 [global] 00:17:11.260 thread=1 00:17:11.260 invalidate=1 00:17:11.260 rw=read 00:17:11.260 time_based=1 00:17:11.260 runtime=10 00:17:11.260 ioengine=libaio 00:17:11.260 direct=1 00:17:11.260 bs=4096 00:17:11.260 iodepth=1 00:17:11.260 norandommap=1 00:17:11.260 numjobs=1 00:17:11.260 00:17:11.260 [job0] 00:17:11.260 filename=/dev/nvme0n1 00:17:11.260 [job1] 00:17:11.260 filename=/dev/nvme0n2 00:17:11.260 [job2] 00:17:11.260 filename=/dev/nvme0n3 00:17:11.260 [job3] 00:17:11.260 filename=/dev/nvme0n4 00:17:11.260 Could not set queue depth (nvme0n1) 00:17:11.260 Could not set queue depth (nvme0n2) 00:17:11.260 Could not set queue depth (nvme0n3) 00:17:11.260 Could not set queue depth (nvme0n4) 00:17:11.525 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:11.525 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:11.525 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:11.525 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:11.525 fio-3.35 00:17:11.525 Starting 4 threads 00:17:14.084 21:33:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:14.344 21:33:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:14.344 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=5464064, buflen=4096 00:17:14.344 fio: pid=2161307, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:14.344 21:33:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:14.344 21:33:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:14.344 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=5824512, buflen=4096 00:17:14.344 fio: pid=2161306, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:14.604 21:33:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:14.604 21:33:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:14.604 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=466944, buflen=4096 00:17:14.604 fio: pid=2161304, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:14.604 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=8683520, buflen=4096 00:17:14.604 fio: pid=2161305, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:14.864 21:33:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:14.864 21:33:04 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:14.864 00:17:14.864 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2161304: Mon Jul 15 21:33:04 2024 00:17:14.864 read: IOPS=39, BW=155KiB/s (159kB/s)(456KiB/2945msec) 00:17:14.864 slat (usec): min=11, max=11677, avg=126.19, stdev=1086.58 00:17:14.864 clat (usec): min=772, max=42933, avg=25513.28, stdev=20225.57 00:17:14.864 lat (usec): min=798, max=54017, avg=25640.35, stdev=20340.53 00:17:14.864 clat percentiles (usec): 00:17:14.864 | 1.00th=[ 783], 5.00th=[ 889], 10.00th=[ 914], 20.00th=[ 979], 00:17:14.864 | 30.00th=[ 1090], 40.00th=[ 1631], 50.00th=[41681], 60.00th=[42206], 00:17:14.864 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:17:14.864 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:14.864 | 99.99th=[42730] 00:17:14.864 bw ( KiB/s): min= 152, max= 176, per=2.50%, avg=161.60, stdev=10.43, samples=5 00:17:14.864 iops : min= 38, max= 44, avg=40.40, stdev= 2.61, samples=5 00:17:14.864 lat (usec) : 1000=22.61% 00:17:14.864 lat (msec) : 2=17.39%, 50=59.13% 00:17:14.864 cpu : usr=0.17%, sys=0.00%, ctx=116, majf=0, minf=1 00:17:14.864 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.864 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.864 issued rwts: total=115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.864 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.864 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2161305: Mon Jul 15 21:33:04 2024 00:17:14.864 read: IOPS=683, BW=2734KiB/s (2799kB/s)(8480KiB/3102msec) 00:17:14.864 slat (usec): min=6, max=25154, avg=58.02, stdev=719.26 00:17:14.864 clat (usec): min=616, max=42104, avg=1388.65, stdev=2640.35 00:17:14.864 lat (usec): min=641, max=42133, avg=1446.68, stdev=2739.36 00:17:14.865 clat percentiles (usec): 00:17:14.865 | 1.00th=[ 914], 5.00th=[ 971], 10.00th=[ 1045], 20.00th=[ 1123], 00:17:14.865 | 30.00th=[ 1172], 40.00th=[ 1205], 50.00th=[ 1237], 60.00th=[ 1254], 00:17:14.865 | 70.00th=[ 1287], 80.00th=[ 1303], 90.00th=[ 1352], 95.00th=[ 1385], 00:17:14.865 | 99.00th=[ 1483], 99.50th=[ 4228], 99.90th=[41681], 99.95th=[41681], 00:17:14.865 | 99.99th=[42206] 00:17:14.865 bw ( KiB/s): min= 1552, max= 3248, per=42.66%, avg=2745.00, stdev=647.54, samples=6 00:17:14.865 iops : min= 388, max= 812, avg=686.17, stdev=161.86, samples=6 00:17:14.865 lat (usec) : 750=0.09%, 1000=6.03% 00:17:14.865 lat (msec) : 2=93.31%, 10=0.09%, 50=0.42% 00:17:14.865 cpu : usr=0.61%, sys=2.13%, ctx=2127, majf=0, minf=1 00:17:14.865 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.865 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.865 issued rwts: total=2121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.865 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.865 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2161306: Mon Jul 15 21:33:04 2024 00:17:14.865 read: IOPS=511, BW=2046KiB/s (2095kB/s)(5688KiB/2780msec) 00:17:14.865 slat (usec): min=7, max=12457, avg=34.97, 
stdev=329.55 00:17:14.865 clat (usec): min=728, max=42200, avg=1894.57, stdev=5373.43 00:17:14.865 lat (usec): min=753, max=53948, avg=1929.55, stdev=5447.57 00:17:14.865 clat percentiles (usec): 00:17:14.865 | 1.00th=[ 840], 5.00th=[ 914], 10.00th=[ 963], 20.00th=[ 1029], 00:17:14.865 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1221], 00:17:14.865 | 70.00th=[ 1237], 80.00th=[ 1287], 90.00th=[ 1336], 95.00th=[ 1369], 00:17:14.865 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:14.865 | 99.99th=[42206] 00:17:14.865 bw ( KiB/s): min= 96, max= 3272, per=33.96%, avg=2185.60, stdev=1290.92, samples=5 00:17:14.865 iops : min= 24, max= 818, avg=546.40, stdev=322.73, samples=5 00:17:14.865 lat (usec) : 750=0.21%, 1000=15.74% 00:17:14.865 lat (msec) : 2=82.08%, 10=0.07%, 20=0.07%, 50=1.76% 00:17:14.865 cpu : usr=0.94%, sys=1.80%, ctx=1425, majf=0, minf=1 00:17:14.865 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.865 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.865 issued rwts: total=1423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.865 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.865 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2161307: Mon Jul 15 21:33:04 2024 00:17:14.865 read: IOPS=512, BW=2050KiB/s (2099kB/s)(5336KiB/2603msec) 00:17:14.865 slat (nsec): min=23553, max=69954, avg=25064.51, stdev=3662.34 00:17:14.865 clat (usec): min=892, max=42108, avg=1903.96, stdev=4914.30 00:17:14.865 lat (usec): min=917, max=42133, avg=1929.02, stdev=4914.71 00:17:14.865 clat percentiles (usec): 00:17:14.865 | 1.00th=[ 1037], 5.00th=[ 1139], 10.00th=[ 1172], 20.00th=[ 1221], 00:17:14.865 | 30.00th=[ 1254], 40.00th=[ 1270], 50.00th=[ 1287], 60.00th=[ 1319], 00:17:14.865 | 70.00th=[ 1336], 80.00th=[ 1369], 90.00th=[ 1434], 95.00th=[ 1532], 00:17:14.865 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:17:14.865 | 99.99th=[42206] 00:17:14.865 bw ( KiB/s): min= 144, max= 3080, per=33.12%, avg=2131.20, stdev=1291.18, samples=5 00:17:14.865 iops : min= 36, max= 770, avg=532.80, stdev=322.80, samples=5 00:17:14.865 lat (usec) : 1000=0.67% 00:17:14.865 lat (msec) : 2=97.75%, 50=1.50% 00:17:14.865 cpu : usr=0.61%, sys=1.46%, ctx=1336, majf=0, minf=2 00:17:14.865 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.865 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.865 issued rwts: total=1335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.865 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.865 00:17:14.865 Run status group 0 (all jobs): 00:17:14.865 READ: bw=6435KiB/s (6589kB/s), 155KiB/s-2734KiB/s (159kB/s-2799kB/s), io=19.5MiB (20.4MB), run=2603-3102msec 00:17:14.865 00:17:14.865 Disk stats (read/write): 00:17:14.865 nvme0n1: ios=131/0, merge=0/0, ticks=2866/0, in_queue=2866, util=94.92% 00:17:14.865 nvme0n2: ios=2120/0, merge=0/0, ticks=2888/0, in_queue=2888, util=93.65% 00:17:14.865 nvme0n3: ios=1418/0, merge=0/0, ticks=2382/0, in_queue=2382, util=96.03% 00:17:14.865 nvme0n4: ios=1334/0, merge=0/0, ticks=2511/0, in_queue=2511, util=96.42% 00:17:14.865 21:33:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:17:14.865 21:33:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:15.125 21:33:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:15.125 21:33:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:15.125 21:33:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:15.125 21:33:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:15.386 21:33:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:15.386 21:33:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:15.646 21:33:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:15.646 21:33:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2161092 00:17:15.646 21:33:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:15.646 21:33:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:15.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.646 21:33:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:15.646 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:15.646 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:15.646 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.646 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:15.646 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.646 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:15.646 21:33:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:15.646 21:33:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:15.646 nvmf hotplug test: fio failed as expected 00:17:15.646 21:33:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:15.907 rmmod nvme_tcp 00:17:15.907 rmmod nvme_fabrics 00:17:15.907 rmmod nvme_keyring 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2157508 ']' 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2157508 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2157508 ']' 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2157508 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2157508 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2157508' 00:17:15.907 killing process with pid 2157508 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2157508 00:17:15.907 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2157508 00:17:16.168 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:16.168 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:16.168 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:16.168 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.168 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:16.168 21:33:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.168 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.168 21:33:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.085 21:33:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:18.085 00:17:18.085 real 0m28.177s 00:17:18.085 user 2m39.807s 00:17:18.085 sys 0m8.891s 00:17:18.085 21:33:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:18.085 21:33:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.085 ************************************ 00:17:18.085 END TEST nvmf_fio_target 00:17:18.085 ************************************ 00:17:18.347 21:33:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:18.347 21:33:07 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:18.347 21:33:07 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:18.347 21:33:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.347 21:33:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:18.347 ************************************ 00:17:18.347 START TEST nvmf_bdevio 00:17:18.347 ************************************ 00:17:18.347 21:33:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:18.347 * Looking for test storage... 00:17:18.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.347 21:33:08 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.348 21:33:08 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:18.348 21:33:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:26.501 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:26.501 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:26.501 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net 
devices under 0000:4b:00.1: cvl_0_1' 00:17:26.501 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.501 21:33:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.501 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.501 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.501 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:26.501 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:26.501 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.501 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:26.501 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:26.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:17:26.501 00:17:26.501 --- 10.0.0.2 ping statistics --- 00:17:26.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.501 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:17:26.501 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:26.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:17:26.501 00:17:26.501 --- 10.0.0.1 ping statistics --- 00:17:26.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.501 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:17:26.501 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.501 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:26.501 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:26.501 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.501 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2166836 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2166836 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2166836 ']' 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:26.502 21:33:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:26.502 [2024-07-15 21:33:15.312866] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:17:26.502 [2024-07-15 21:33:15.312917] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.502 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.502 [2024-07-15 21:33:15.395847] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:26.502 [2024-07-15 21:33:15.460353] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.502 [2024-07-15 21:33:15.460389] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:26.502 [2024-07-15 21:33:15.460397] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.502 [2024-07-15 21:33:15.460403] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.502 [2024-07-15 21:33:15.460409] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.502 [2024-07-15 21:33:15.460550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:26.502 [2024-07-15 21:33:15.460684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:26.502 [2024-07-15 21:33:15.460835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:26.502 [2024-07-15 21:33:15.460835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:26.502 [2024-07-15 21:33:16.145902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:26.502 Malloc0 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:17:26.502 [2024-07-15 21:33:16.211660] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:26.502 { 00:17:26.502 "params": { 00:17:26.502 "name": "Nvme$subsystem", 00:17:26.502 "trtype": "$TEST_TRANSPORT", 00:17:26.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.502 "adrfam": "ipv4", 00:17:26.502 "trsvcid": "$NVMF_PORT", 00:17:26.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.502 "hdgst": ${hdgst:-false}, 00:17:26.502 "ddgst": ${ddgst:-false} 00:17:26.502 }, 00:17:26.502 "method": "bdev_nvme_attach_controller" 00:17:26.502 } 00:17:26.502 EOF 00:17:26.502 )") 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:26.502 21:33:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:26.502 "params": { 00:17:26.502 "name": "Nvme1", 00:17:26.502 "trtype": "tcp", 00:17:26.502 "traddr": "10.0.0.2", 00:17:26.502 "adrfam": "ipv4", 00:17:26.502 "trsvcid": "4420", 00:17:26.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:26.502 "hdgst": false, 00:17:26.502 "ddgst": false 00:17:26.502 }, 00:17:26.502 "method": "bdev_nvme_attach_controller" 00:17:26.502 }' 00:17:26.502 [2024-07-15 21:33:16.267027] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:17:26.502 [2024-07-15 21:33:16.267099] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167049 ] 00:17:26.502 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.775 [2024-07-15 21:33:16.333515] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:26.775 [2024-07-15 21:33:16.409826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.775 [2024-07-15 21:33:16.409946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.775 [2024-07-15 21:33:16.409949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.775 I/O targets: 00:17:26.775 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:26.775 00:17:26.775 00:17:26.775 CUnit - A unit testing framework for C - Version 2.1-3 00:17:26.775 http://cunit.sourceforge.net/ 00:17:26.775 00:17:26.775 00:17:26.775 Suite: bdevio tests on: Nvme1n1 00:17:27.060 Test: blockdev write read block ...passed 00:17:27.060 Test: blockdev write zeroes read block ...passed 00:17:27.060 Test: blockdev write zeroes read no split ...passed 00:17:27.060 Test: blockdev write zeroes read split ...passed 00:17:27.060 Test: blockdev write zeroes read split partial ...passed 00:17:27.060 Test: blockdev reset ...[2024-07-15 21:33:16.688314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:27.060 [2024-07-15 21:33:16.688382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x802ed0 (9): Bad file descriptor 00:17:27.060 [2024-07-15 21:33:16.701485] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:27.060 passed 00:17:27.060 Test: blockdev write read 8 blocks ...passed 00:17:27.060 Test: blockdev write read size > 128k ...passed 00:17:27.060 Test: blockdev write read invalid size ...passed 00:17:27.060 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:27.060 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:27.060 Test: blockdev write read max offset ...passed 00:17:27.321 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:27.321 Test: blockdev writev readv 8 blocks ...passed 00:17:27.321 Test: blockdev writev readv 30 x 1block ...passed 00:17:27.321 Test: blockdev writev readv block ...passed 00:17:27.321 Test: blockdev writev readv size > 128k ...passed 00:17:27.321 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:27.321 Test: blockdev comparev and writev ...[2024-07-15 21:33:16.931790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.321 [2024-07-15 21:33:16.931816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:27.321 [2024-07-15 21:33:16.931827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.321 [2024-07-15 21:33:16.931833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:27.321 [2024-07-15 21:33:16.932399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.321 [2024-07-15 21:33:16.932407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:27.321 [2024-07-15 21:33:16.932417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.321 [2024-07-15 21:33:16.932422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:27.321 [2024-07-15 21:33:16.932971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.321 [2024-07-15 21:33:16.932979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:27.321 [2024-07-15 21:33:16.932988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.321 [2024-07-15 21:33:16.932994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:27.321 [2024-07-15 21:33:16.933593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.321 [2024-07-15 21:33:16.933602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:27.321 [2024-07-15 21:33:16.933612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:27.321 [2024-07-15 21:33:16.933617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:27.321 passed 00:17:27.321 Test: blockdev nvme passthru rw ...passed 00:17:27.321 Test: blockdev nvme passthru vendor specific ...[2024-07-15 21:33:17.018129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:27.321 [2024-07-15 21:33:17.018140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:27.321 [2024-07-15 21:33:17.018553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:27.321 [2024-07-15 21:33:17.018559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:27.321 [2024-07-15 21:33:17.019033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:27.321 [2024-07-15 21:33:17.019040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:27.321 [2024-07-15 21:33:17.019487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:27.321 [2024-07-15 21:33:17.019494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:27.321 passed 00:17:27.321 Test: blockdev nvme admin passthru ...passed 00:17:27.321 Test: blockdev copy ...passed 00:17:27.321 00:17:27.321 Run Summary: Type Total Ran Passed Failed Inactive 00:17:27.321 suites 1 1 n/a 0 0 00:17:27.321 tests 23 23 23 0 0 00:17:27.321 asserts 152 152 152 0 n/a 00:17:27.321 00:17:27.321 Elapsed time = 1.063 seconds 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:27.582 rmmod nvme_tcp 00:17:27.582 rmmod nvme_fabrics 00:17:27.582 rmmod nvme_keyring 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2166836 ']' 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2166836 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
2166836 ']' 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2166836 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2166836 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2166836' 00:17:27.582 killing process with pid 2166836 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2166836 00:17:27.582 21:33:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2166836 00:17:27.844 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:27.844 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:27.844 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:27.844 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:27.844 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:27.844 21:33:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.844 21:33:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.844 21:33:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.762 21:33:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:29.762 00:17:29.762 real 0m11.592s 00:17:29.762 user 0m11.851s 00:17:29.762 sys 0m5.903s 00:17:29.762 21:33:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:29.762 21:33:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:29.762 ************************************ 00:17:29.762 END TEST nvmf_bdevio 00:17:29.762 ************************************ 00:17:30.048 21:33:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:30.048 21:33:19 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:30.048 21:33:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:30.048 21:33:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:30.048 21:33:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:30.048 ************************************ 00:17:30.048 START TEST nvmf_auth_target 00:17:30.048 ************************************ 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:30.048 * Looking for test storage... 
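The tail of the bdevio test above is the nvmftestfini cleanup. Condensed, with the PID and interface names from this run, it amounts to the steps below (not meant to run standalone: wait only succeeds because the target is a child of the test shell, and the _remove_spdk_ns commands are hidden by the redirect above, so that step is paraphrased):

modprobe -v -r nvme-tcp        # unloads nvme_tcp, nvme_fabrics and nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill 2166836 && wait 2166836   # stop the nvmf_tgt; ps reports it as reactor_3 above
# _remove_spdk_ns is assumed to delete the cvl_0_0_ns_spdk namespace used by the target (its trace is redirected away)
ip -4 addr flush cvl_0_1       # drop the initiator-side test address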
00:17:30.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:30.048 21:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:30.049 21:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.194 21:33:26 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:38.194 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:38.194 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:17:38.194 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:38.194 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:38.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:17:38.194 00:17:38.194 --- 10.0.0.2 ping statistics --- 00:17:38.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.194 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:17:38.194 00:17:38.194 --- 10.0.0.1 ping statistics --- 00:17:38.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.194 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:38.194 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.195 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:38.195 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:38.195 21:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:38.195 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:38.195 21:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:38.195 21:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.195 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2171437 00:17:38.195 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2171437 00:17:38.195 21:33:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:38.195 21:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2171437 ']' 00:17:38.195 21:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.195 21:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.195 21:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
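The nvmf_tcp_init sequence traced above splits the two E810 ports between network namespaces so the auth test's target and initiator traffic crosses the real link: cvl_0_0 moves into cvl_0_0_ns_spdk as the target-side port, while cvl_0_1 stays in the root namespace as the initiator side. Condensed from the trace:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS                              # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator / host side
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept inbound NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                         # initiator -> target sanity check
ip netns exec $NS ping -c 1 10.0.0.1                       # target -> initiator sanity check
# From here on nvmf_tgt runs as "ip netns exec $NS .../nvmf_tgt ...", as in the nvmfappstart invocation just above.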
00:17:38.195 21:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.195 21:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2171544 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ed0480527382b58efe91b45a6166e9f884565d2aa661135a 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.XNd 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ed0480527382b58efe91b45a6166e9f884565d2aa661135a 0 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ed0480527382b58efe91b45a6166e9f884565d2aa661135a 0 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ed0480527382b58efe91b45a6166e9f884565d2aa661135a 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.XNd 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.XNd 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.XNd 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dfafabda052f55e7544e994f41948bd019973c5d7ec5b2594377dad0c0dfbe67 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.hNV 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dfafabda052f55e7544e994f41948bd019973c5d7ec5b2594377dad0c0dfbe67 3 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dfafabda052f55e7544e994f41948bd019973c5d7ec5b2594377dad0c0dfbe67 3 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dfafabda052f55e7544e994f41948bd019973c5d7ec5b2594377dad0c0dfbe67 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.hNV 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.hNV 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.hNV 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7199539fade822ccf52b84e0f496b26d 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.LfR 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7199539fade822ccf52b84e0f496b26d 1 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7199539fade822ccf52b84e0f496b26d 1 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=7199539fade822ccf52b84e0f496b26d 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:38.195 21:33:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.LfR 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.LfR 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.LfR 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=195e6082668ec05cfe84d76a6fe5aafd4220117a38c803fd 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JNI 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 195e6082668ec05cfe84d76a6fe5aafd4220117a38c803fd 2 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 195e6082668ec05cfe84d76a6fe5aafd4220117a38c803fd 2 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=195e6082668ec05cfe84d76a6fe5aafd4220117a38c803fd 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JNI 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JNI 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.JNI 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fec914498237dfdc2f055a38bfcfdeeef5d7b1880ffc36c4 00:17:38.456 
21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.KDu 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fec914498237dfdc2f055a38bfcfdeeef5d7b1880ffc36c4 2 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fec914498237dfdc2f055a38bfcfdeeef5d7b1880ffc36c4 2 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fec914498237dfdc2f055a38bfcfdeeef5d7b1880ffc36c4 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.KDu 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.KDu 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.KDu 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fd5b6f1681d110bd5aa1d8c4a31bd787 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.O09 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fd5b6f1681d110bd5aa1d8c4a31bd787 1 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fd5b6f1681d110bd5aa1d8c4a31bd787 1 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fd5b6f1681d110bd5aa1d8c4a31bd787 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.O09 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.O09 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.O09 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=388ad01f8b5928118ccac278ac6b44206e7b57320043262cf1030360c84e8721 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.94Z 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 388ad01f8b5928118ccac278ac6b44206e7b57320043262cf1030360c84e8721 3 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 388ad01f8b5928118ccac278ac6b44206e7b57320043262cf1030360c84e8721 3 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=388ad01f8b5928118ccac278ac6b44206e7b57320043262cf1030360c84e8721 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.94Z 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.94Z 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.94Z 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2171437 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2171437 ']' 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
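Each gen_dhchap_key call traced above follows the same pattern: pick a digest id (null=0, sha256=1, sha384=2, sha512=3), read len/2 random bytes as a hex string, format it as a DHHC-1 secret and store it in a 0600 temp file. The body of the "python -" formatting step is not traced, so the sketch below (under a hypothetical function name) reconstructs it from the DHHC-1:<id>:<base64 payload>: secrets that show up later in the nvme connect command; the real helper also folds a checksum into the payload, which is omitted here.

gen_dhchap_key_sketch() {                                   # hypothetical name, mirroring nvmf/common.sh's gen_dhchap_key
    local digest=$1 len=$2 id key file
    case $digest in
        null)   id=00 ;;
        sha256) id=01 ;;
        sha384) id=02 ;;
        sha512) id=03 ;;
    esac
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)          # len hex characters of randomness, as traced
    file=$(mktemp -t "spdk.key-$digest.XXX")
    printf 'DHHC-1:%s:%s:\n' "$id" "$(printf %s "$key" | base64 -w0)" > "$file"   # payload checksum omitted (assumption)
    chmod 0600 "$file"
    echo "$file"
}
# As in the trace: keys[0]=$(gen_dhchap_key_sketch null 48); ckeys[0]=$(gen_dhchap_key_sketch sha512 64)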
00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.456 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.717 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.717 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:38.717 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2171544 /var/tmp/host.sock 00:17:38.717 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2171544 ']' 00:17:38.717 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:38.717 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.717 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:38.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:38.717 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.717 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.XNd 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.XNd 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.XNd 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.hNV ]] 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hNV 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hNV 00:17:38.979 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hNV 00:17:39.240 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:39.240 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.LfR 00:17:39.240 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.240 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.240 21:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.240 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.LfR 00:17:39.240 21:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.LfR 00:17:39.500 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.JNI ]] 00:17:39.500 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JNI 00:17:39.500 21:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.500 21:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.500 21:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.500 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JNI 00:17:39.500 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JNI 00:17:39.500 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:39.500 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KDu 00:17:39.500 21:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.500 21:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.500 21:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.500 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.KDu 00:17:39.500 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.KDu 00:17:39.761 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.O09 ]] 00:17:39.761 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.O09 00:17:39.761 21:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.761 21:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.761 21:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.761 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.O09 00:17:39.761 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.O09 00:17:40.021 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:40.021 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.94Z 00:17:40.021 21:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.021 21:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.021 21:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.021 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.94Z 00:17:40.021 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.94Z 00:17:40.021 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:40.021 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:40.021 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.021 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.021 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:40.021 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:40.282 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:40.282 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.282 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:40.282 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:40.282 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:40.282 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.282 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.282 21:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.282 21:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.282 21:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.282 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.282 21:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.543 00:17:40.543 21:33:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.543 21:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.543 21:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.803 21:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.803 21:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.803 21:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.803 21:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.803 21:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.803 21:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.803 { 00:17:40.803 "cntlid": 1, 00:17:40.803 "qid": 0, 00:17:40.803 "state": "enabled", 00:17:40.803 "thread": "nvmf_tgt_poll_group_000", 00:17:40.803 "listen_address": { 00:17:40.803 "trtype": "TCP", 00:17:40.803 "adrfam": "IPv4", 00:17:40.803 "traddr": "10.0.0.2", 00:17:40.803 "trsvcid": "4420" 00:17:40.803 }, 00:17:40.803 "peer_address": { 00:17:40.803 "trtype": "TCP", 00:17:40.803 "adrfam": "IPv4", 00:17:40.803 "traddr": "10.0.0.1", 00:17:40.803 "trsvcid": "59040" 00:17:40.803 }, 00:17:40.803 "auth": { 00:17:40.803 "state": "completed", 00:17:40.803 "digest": "sha256", 00:17:40.803 "dhgroup": "null" 00:17:40.803 } 00:17:40.803 } 00:17:40.803 ]' 00:17:40.803 21:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.803 21:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.803 21:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.803 21:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:40.803 21:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.803 21:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.803 21:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.803 21:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.064 21:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:17:41.635 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.636 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.636 21:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.636 21:33:31 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.896 21:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.897 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.897 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.897 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.897 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:41.897 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.897 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.897 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:41.897 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:41.897 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.897 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.897 21:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.897 21:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.897 21:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.897 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.897 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.157 00:17:42.157 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.157 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.157 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.418 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.418 21:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.418 21:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.418 21:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.418 21:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.418 21:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.418 { 00:17:42.418 "cntlid": 3, 00:17:42.418 "qid": 0, 00:17:42.418 
"state": "enabled", 00:17:42.418 "thread": "nvmf_tgt_poll_group_000", 00:17:42.418 "listen_address": { 00:17:42.418 "trtype": "TCP", 00:17:42.418 "adrfam": "IPv4", 00:17:42.418 "traddr": "10.0.0.2", 00:17:42.418 "trsvcid": "4420" 00:17:42.418 }, 00:17:42.418 "peer_address": { 00:17:42.418 "trtype": "TCP", 00:17:42.418 "adrfam": "IPv4", 00:17:42.418 "traddr": "10.0.0.1", 00:17:42.418 "trsvcid": "59060" 00:17:42.418 }, 00:17:42.418 "auth": { 00:17:42.418 "state": "completed", 00:17:42.418 "digest": "sha256", 00:17:42.418 "dhgroup": "null" 00:17:42.418 } 00:17:42.418 } 00:17:42.418 ]' 00:17:42.418 21:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.418 21:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.418 21:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.418 21:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:42.418 21:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.418 21:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.418 21:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.418 21:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.678 21:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:17:43.257 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:43.518 21:33:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.518 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.519 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.779 00:17:43.779 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.779 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.779 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.040 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.040 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.040 21:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.040 21:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.040 21:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.040 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.040 { 00:17:44.040 "cntlid": 5, 00:17:44.040 "qid": 0, 00:17:44.040 "state": "enabled", 00:17:44.040 "thread": "nvmf_tgt_poll_group_000", 00:17:44.040 "listen_address": { 00:17:44.040 "trtype": "TCP", 00:17:44.040 "adrfam": "IPv4", 00:17:44.040 "traddr": "10.0.0.2", 00:17:44.040 "trsvcid": "4420" 00:17:44.040 }, 00:17:44.040 "peer_address": { 00:17:44.040 "trtype": "TCP", 00:17:44.040 "adrfam": "IPv4", 00:17:44.040 "traddr": "10.0.0.1", 00:17:44.040 "trsvcid": "59082" 00:17:44.040 }, 00:17:44.040 "auth": { 00:17:44.040 "state": "completed", 00:17:44.040 "digest": "sha256", 00:17:44.040 "dhgroup": "null" 00:17:44.040 } 00:17:44.040 } 00:17:44.040 ]' 00:17:44.040 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.040 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.040 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.040 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:44.040 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:44.040 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.040 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.040 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.301 21:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:17:44.871 21:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.871 21:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.871 21:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.871 21:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.871 21:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.871 21:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.871 21:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:44.871 21:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:45.131 21:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:45.131 21:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.131 21:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:45.131 21:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:45.131 21:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:45.131 21:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.131 21:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:45.131 21:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.131 21:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.131 21:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.131 21:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.131 21:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.391 00:17:45.391 21:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.391 21:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.391 21:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.391 21:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.391 21:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.391 21:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.391 21:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.391 21:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.391 21:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.391 { 00:17:45.391 "cntlid": 7, 00:17:45.391 "qid": 0, 00:17:45.391 "state": "enabled", 00:17:45.391 "thread": "nvmf_tgt_poll_group_000", 00:17:45.391 "listen_address": { 00:17:45.391 "trtype": "TCP", 00:17:45.391 "adrfam": "IPv4", 00:17:45.391 "traddr": "10.0.0.2", 00:17:45.391 "trsvcid": "4420" 00:17:45.391 }, 00:17:45.391 "peer_address": { 00:17:45.391 "trtype": "TCP", 00:17:45.391 "adrfam": "IPv4", 00:17:45.391 "traddr": "10.0.0.1", 00:17:45.391 "trsvcid": "59106" 00:17:45.391 }, 00:17:45.391 "auth": { 00:17:45.391 "state": "completed", 00:17:45.391 "digest": "sha256", 00:17:45.391 "dhgroup": "null" 00:17:45.391 } 00:17:45.391 } 00:17:45.391 ]' 00:17:45.391 21:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.652 21:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.652 21:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.652 21:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:45.652 21:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.652 21:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.652 21:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.652 21:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.912 21:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:17:46.484 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.484 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.484 21:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.484 21:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.484 21:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.484 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.484 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.484 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:46.484 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:46.748 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:46.748 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.748 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:46.748 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:46.748 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:46.748 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.748 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.748 21:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.748 21:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.748 21:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.748 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.748 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.057 00:17:47.057 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.057 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.057 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.057 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.057 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.057 21:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:17:47.057 21:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.057 21:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.057 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.057 { 00:17:47.057 "cntlid": 9, 00:17:47.057 "qid": 0, 00:17:47.057 "state": "enabled", 00:17:47.057 "thread": "nvmf_tgt_poll_group_000", 00:17:47.057 "listen_address": { 00:17:47.057 "trtype": "TCP", 00:17:47.057 "adrfam": "IPv4", 00:17:47.057 "traddr": "10.0.0.2", 00:17:47.057 "trsvcid": "4420" 00:17:47.057 }, 00:17:47.057 "peer_address": { 00:17:47.057 "trtype": "TCP", 00:17:47.057 "adrfam": "IPv4", 00:17:47.057 "traddr": "10.0.0.1", 00:17:47.057 "trsvcid": "59128" 00:17:47.057 }, 00:17:47.057 "auth": { 00:17:47.057 "state": "completed", 00:17:47.057 "digest": "sha256", 00:17:47.057 "dhgroup": "ffdhe2048" 00:17:47.057 } 00:17:47.057 } 00:17:47.057 ]' 00:17:47.057 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.057 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.057 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.319 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:47.319 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.319 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.319 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.319 21:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.319 21:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.262 21:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.523 00:17:48.523 21:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.523 21:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.524 21:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.784 21:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.784 21:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.784 21:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.784 21:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.784 21:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.784 21:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.784 { 00:17:48.784 "cntlid": 11, 00:17:48.784 "qid": 0, 00:17:48.784 "state": "enabled", 00:17:48.784 "thread": "nvmf_tgt_poll_group_000", 00:17:48.784 "listen_address": { 00:17:48.784 "trtype": "TCP", 00:17:48.784 "adrfam": "IPv4", 00:17:48.784 "traddr": "10.0.0.2", 00:17:48.784 "trsvcid": "4420" 00:17:48.784 }, 00:17:48.784 "peer_address": { 00:17:48.784 "trtype": "TCP", 00:17:48.784 "adrfam": "IPv4", 00:17:48.784 "traddr": "10.0.0.1", 00:17:48.784 "trsvcid": "51138" 00:17:48.784 }, 00:17:48.784 "auth": { 00:17:48.784 "state": "completed", 00:17:48.784 "digest": "sha256", 00:17:48.784 "dhgroup": "ffdhe2048" 00:17:48.784 } 00:17:48.784 } 00:17:48.784 ]' 00:17:48.784 
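Condensed, one round of the trace above (digest sha256, DH group null, key pair key1/ckey1) amounts to the sequence sketched below. The sketch reuses only the RPCs and flags visible in this run; the key files, subsystem NQN, host UUID and host socket are this run's values, and the target-side calls are shown against rpc.py's default socket rather than the test's rpc_cmd wrapper.

#!/usr/bin/env bash
# Sketch of one DH-HMAC-CHAP round from the trace above (this run's values).
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Initiator-side SPDK application, reached over its own RPC socket.
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }

# 1. Register the host key and controller key in both keyrings.
"$rpc"  keyring_file_add_key key1  /tmp/spdk.key-sha256.LfR
"$rpc"  keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JNI
hostrpc keyring_file_add_key key1  /tmp/spdk.key-sha256.LfR
hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JNI

# 2. Pin the initiator to a single digest/DH-group combination.
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# 3. Allow the host on the subsystem, bound to that key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 4. Attach a controller from the initiator; authentication happens here.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 5. Verify the controller exists and what the target's qpair negotiated.
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# 6. Tear down so the next digest/DH-group/key combination starts clean.
hostrpc bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"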
21:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.784 21:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.784 21:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.784 21:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.784 21:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.784 21:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.784 21:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.784 21:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.045 21:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.988 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.248 00:17:50.248 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.248 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.248 21:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.248 21:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.248 21:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.248 21:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.248 21:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.509 21:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.509 21:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.509 { 00:17:50.509 "cntlid": 13, 00:17:50.509 "qid": 0, 00:17:50.509 "state": "enabled", 00:17:50.509 "thread": "nvmf_tgt_poll_group_000", 00:17:50.509 "listen_address": { 00:17:50.509 "trtype": "TCP", 00:17:50.509 "adrfam": "IPv4", 00:17:50.509 "traddr": "10.0.0.2", 00:17:50.509 "trsvcid": "4420" 00:17:50.509 }, 00:17:50.509 "peer_address": { 00:17:50.509 "trtype": "TCP", 00:17:50.509 "adrfam": "IPv4", 00:17:50.509 "traddr": "10.0.0.1", 00:17:50.509 "trsvcid": "51174" 00:17:50.509 }, 00:17:50.509 "auth": { 00:17:50.509 "state": "completed", 00:17:50.509 "digest": "sha256", 00:17:50.509 "dhgroup": "ffdhe2048" 00:17:50.509 } 00:17:50.509 } 00:17:50.509 ]' 00:17:50.509 21:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.509 21:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.509 21:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.509 21:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.509 21:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.509 21:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.509 21:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.509 21:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.770 21:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:17:51.341 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.341 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.341 21:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.341 21:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.341 21:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.341 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.341 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:51.341 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:51.602 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:51.602 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.602 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:51.602 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:51.602 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:51.602 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.602 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:51.602 21:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.602 21:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.602 21:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.602 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.602 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.863 00:17:51.863 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.864 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.864 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.123 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.124 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.124 21:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.124 21:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.124 21:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.124 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.124 { 00:17:52.124 "cntlid": 15, 00:17:52.124 "qid": 0, 00:17:52.124 "state": "enabled", 00:17:52.124 "thread": "nvmf_tgt_poll_group_000", 00:17:52.124 "listen_address": { 00:17:52.124 "trtype": "TCP", 00:17:52.124 "adrfam": "IPv4", 00:17:52.124 "traddr": "10.0.0.2", 00:17:52.124 "trsvcid": "4420" 00:17:52.124 }, 00:17:52.124 "peer_address": { 00:17:52.124 "trtype": "TCP", 00:17:52.124 "adrfam": "IPv4", 00:17:52.124 "traddr": "10.0.0.1", 00:17:52.124 "trsvcid": "51218" 00:17:52.124 }, 00:17:52.124 "auth": { 00:17:52.124 "state": "completed", 00:17:52.124 "digest": "sha256", 00:17:52.124 "dhgroup": "ffdhe2048" 00:17:52.124 } 00:17:52.124 } 00:17:52.124 ]' 00:17:52.124 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.124 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.124 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.124 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.124 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.124 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.124 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.124 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.384 21:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:17:52.955 21:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.955 21:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.955 21:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.955 21:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.955 21:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.955 21:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.955 21:33:42 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.955 21:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.955 21:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.215 21:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:53.215 21:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.215 21:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:53.215 21:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:53.215 21:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:53.215 21:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.215 21:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.215 21:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.215 21:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.215 21:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.215 21:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.215 21:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.475 00:17:53.475 21:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.475 21:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.475 21:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.735 21:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.735 21:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.735 21:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.735 21:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.735 21:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.735 21:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.735 { 00:17:53.735 "cntlid": 17, 00:17:53.735 "qid": 0, 00:17:53.735 "state": "enabled", 00:17:53.735 "thread": "nvmf_tgt_poll_group_000", 00:17:53.735 "listen_address": { 00:17:53.735 "trtype": "TCP", 00:17:53.735 "adrfam": "IPv4", 00:17:53.735 "traddr": 
"10.0.0.2", 00:17:53.735 "trsvcid": "4420" 00:17:53.735 }, 00:17:53.735 "peer_address": { 00:17:53.735 "trtype": "TCP", 00:17:53.735 "adrfam": "IPv4", 00:17:53.735 "traddr": "10.0.0.1", 00:17:53.735 "trsvcid": "51248" 00:17:53.735 }, 00:17:53.735 "auth": { 00:17:53.735 "state": "completed", 00:17:53.735 "digest": "sha256", 00:17:53.735 "dhgroup": "ffdhe3072" 00:17:53.735 } 00:17:53.735 } 00:17:53.735 ]' 00:17:53.735 21:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.735 21:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.735 21:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.735 21:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:53.735 21:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.735 21:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.735 21:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.735 21:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.994 21:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:17:54.562 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.822 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.082 00:17:55.082 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.082 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.082 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.342 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.342 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.342 21:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.342 21:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.342 21:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.342 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.342 { 00:17:55.342 "cntlid": 19, 00:17:55.342 "qid": 0, 00:17:55.342 "state": "enabled", 00:17:55.342 "thread": "nvmf_tgt_poll_group_000", 00:17:55.342 "listen_address": { 00:17:55.342 "trtype": "TCP", 00:17:55.342 "adrfam": "IPv4", 00:17:55.342 "traddr": "10.0.0.2", 00:17:55.342 "trsvcid": "4420" 00:17:55.342 }, 00:17:55.342 "peer_address": { 00:17:55.342 "trtype": "TCP", 00:17:55.342 "adrfam": "IPv4", 00:17:55.342 "traddr": "10.0.0.1", 00:17:55.342 "trsvcid": "51282" 00:17:55.342 }, 00:17:55.342 "auth": { 00:17:55.342 "state": "completed", 00:17:55.342 "digest": "sha256", 00:17:55.342 "dhgroup": "ffdhe3072" 00:17:55.342 } 00:17:55.342 } 00:17:55.342 ]' 00:17:55.342 21:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.342 21:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.342 21:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.342 21:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:55.342 21:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.342 21:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.342 21:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.342 21:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.602 21:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.542 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.804 00:17:56.804 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.804 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.804 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.065 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.065 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.065 21:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.065 21:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.065 21:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.065 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.065 { 00:17:57.065 "cntlid": 21, 00:17:57.065 "qid": 0, 00:17:57.065 "state": "enabled", 00:17:57.065 "thread": "nvmf_tgt_poll_group_000", 00:17:57.065 "listen_address": { 00:17:57.065 "trtype": "TCP", 00:17:57.065 "adrfam": "IPv4", 00:17:57.065 "traddr": "10.0.0.2", 00:17:57.065 "trsvcid": "4420" 00:17:57.065 }, 00:17:57.065 "peer_address": { 00:17:57.065 "trtype": "TCP", 00:17:57.065 "adrfam": "IPv4", 00:17:57.065 "traddr": "10.0.0.1", 00:17:57.065 "trsvcid": "51306" 00:17:57.065 }, 00:17:57.065 "auth": { 00:17:57.065 "state": "completed", 00:17:57.065 "digest": "sha256", 00:17:57.065 "dhgroup": "ffdhe3072" 00:17:57.065 } 00:17:57.065 } 00:17:57.065 ]' 00:17:57.065 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.065 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.065 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.065 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:57.065 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.065 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.065 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.065 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.326 21:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:17:57.898 21:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
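For readers following the trace, the entries above repeat one fixed cycle per digest/dhgroup/key combination. Below is a minimal sketch of that cycle, condensed from the RPC and nvme-cli invocations visible in the log; the rpc.py path, host socket, NQNs, address and key names are taken from the trace itself, while the DHHC-1 secret values are placeholders and the key names refer to keys set up earlier in the test script. It is an editorial condensation, not part of the captured output.
# --- sketch of one connect_authenticate cycle (values as seen in the trace; secrets are placeholders) ---
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
# restrict the host-side bdev_nvme layer to the digest/dhgroup pair under test
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
# allow the host on the target subsystem with the key pair under test (key1/ckey1 were registered earlier in the script)
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# attach a controller from the host side, authenticating with the same keys
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# confirm the controller exists and that the qpair finished authentication with the expected parameters
"$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name'          # expect nvme0
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'        # expect "completed"
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'      # expect ffdhe3072
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
# repeat the handshake through nvme-cli with the DHHC-1 secrets, then clean up the host entry
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret "$dhchap_secret" --dhchap-ctrl-secret "$dhchap_ctrl_secret"   # placeholders for the DHHC-1:... values in the log
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
# --- end sketch; the captured trace resumes below ---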
00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.159 21:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.419 00:17:58.419 21:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.419 21:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.419 21:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.680 21:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.680 21:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.680 21:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.680 21:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:58.680 21:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.680 21:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.680 { 00:17:58.680 "cntlid": 23, 00:17:58.680 "qid": 0, 00:17:58.680 "state": "enabled", 00:17:58.680 "thread": "nvmf_tgt_poll_group_000", 00:17:58.680 "listen_address": { 00:17:58.680 "trtype": "TCP", 00:17:58.680 "adrfam": "IPv4", 00:17:58.680 "traddr": "10.0.0.2", 00:17:58.680 "trsvcid": "4420" 00:17:58.680 }, 00:17:58.680 "peer_address": { 00:17:58.680 "trtype": "TCP", 00:17:58.680 "adrfam": "IPv4", 00:17:58.680 "traddr": "10.0.0.1", 00:17:58.680 "trsvcid": "39420" 00:17:58.680 }, 00:17:58.680 "auth": { 00:17:58.680 "state": "completed", 00:17:58.680 "digest": "sha256", 00:17:58.680 "dhgroup": "ffdhe3072" 00:17:58.680 } 00:17:58.680 } 00:17:58.680 ]' 00:17:58.680 21:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.680 21:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.680 21:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.680 21:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.680 21:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.680 21:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.680 21:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.680 21:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.968 21:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:17:59.540 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.801 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.063 00:18:00.063 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.063 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.063 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.325 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.325 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.325 21:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.325 21:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.325 21:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.325 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.325 { 00:18:00.325 "cntlid": 25, 00:18:00.325 "qid": 0, 00:18:00.325 "state": "enabled", 00:18:00.325 "thread": "nvmf_tgt_poll_group_000", 00:18:00.325 "listen_address": { 00:18:00.325 "trtype": "TCP", 00:18:00.325 "adrfam": "IPv4", 00:18:00.325 "traddr": "10.0.0.2", 00:18:00.325 "trsvcid": "4420" 00:18:00.325 }, 00:18:00.325 "peer_address": { 00:18:00.325 "trtype": "TCP", 00:18:00.325 "adrfam": "IPv4", 00:18:00.325 "traddr": "10.0.0.1", 00:18:00.325 "trsvcid": "39434" 00:18:00.325 }, 00:18:00.325 "auth": { 00:18:00.325 "state": "completed", 00:18:00.325 "digest": "sha256", 00:18:00.325 "dhgroup": "ffdhe4096" 00:18:00.325 } 00:18:00.325 } 00:18:00.325 ]' 00:18:00.325 21:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.325 21:33:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.325 21:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.325 21:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.325 21:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.325 21:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.325 21:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.325 21:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.586 21:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.528 21:33:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.528 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.823 00:18:01.823 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.823 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.823 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.084 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.084 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.084 21:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.084 21:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.084 21:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.084 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.084 { 00:18:02.084 "cntlid": 27, 00:18:02.084 "qid": 0, 00:18:02.084 "state": "enabled", 00:18:02.084 "thread": "nvmf_tgt_poll_group_000", 00:18:02.084 "listen_address": { 00:18:02.084 "trtype": "TCP", 00:18:02.084 "adrfam": "IPv4", 00:18:02.084 "traddr": "10.0.0.2", 00:18:02.084 "trsvcid": "4420" 00:18:02.084 }, 00:18:02.084 "peer_address": { 00:18:02.084 "trtype": "TCP", 00:18:02.084 "adrfam": "IPv4", 00:18:02.084 "traddr": "10.0.0.1", 00:18:02.084 "trsvcid": "39460" 00:18:02.084 }, 00:18:02.084 "auth": { 00:18:02.084 "state": "completed", 00:18:02.084 "digest": "sha256", 00:18:02.084 "dhgroup": "ffdhe4096" 00:18:02.084 } 00:18:02.084 } 00:18:02.084 ]' 00:18:02.084 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.084 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.084 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.084 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.084 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.084 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.084 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.084 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.344 21:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.287 21:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.547 00:18:03.547 21:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.547 21:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.547 21:33:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.547 21:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.547 21:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.547 21:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.547 21:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.808 21:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.808 21:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.808 { 00:18:03.808 "cntlid": 29, 00:18:03.808 "qid": 0, 00:18:03.808 "state": "enabled", 00:18:03.808 "thread": "nvmf_tgt_poll_group_000", 00:18:03.808 "listen_address": { 00:18:03.808 "trtype": "TCP", 00:18:03.808 "adrfam": "IPv4", 00:18:03.808 "traddr": "10.0.0.2", 00:18:03.808 "trsvcid": "4420" 00:18:03.808 }, 00:18:03.808 "peer_address": { 00:18:03.808 "trtype": "TCP", 00:18:03.808 "adrfam": "IPv4", 00:18:03.808 "traddr": "10.0.0.1", 00:18:03.808 "trsvcid": "39492" 00:18:03.808 }, 00:18:03.808 "auth": { 00:18:03.808 "state": "completed", 00:18:03.808 "digest": "sha256", 00:18:03.808 "dhgroup": "ffdhe4096" 00:18:03.808 } 00:18:03.808 } 00:18:03.808 ]' 00:18:03.808 21:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.808 21:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.808 21:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.808 21:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.808 21:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.808 21:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.808 21:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.808 21:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.069 21:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:18:04.639 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.639 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.639 21:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.639 21:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.639 21:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.639 21:33:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.639 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:04.639 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:04.899 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:04.899 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.899 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.899 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:04.899 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:04.899 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.899 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:04.899 21:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.899 21:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.899 21:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.899 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.899 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.160 00:18:05.160 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.160 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.160 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.160 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.160 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.160 21:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.160 21:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.421 21:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.421 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.421 { 00:18:05.421 "cntlid": 31, 00:18:05.421 "qid": 0, 00:18:05.421 "state": "enabled", 00:18:05.421 "thread": "nvmf_tgt_poll_group_000", 00:18:05.421 "listen_address": { 00:18:05.421 "trtype": "TCP", 00:18:05.421 "adrfam": "IPv4", 00:18:05.421 "traddr": "10.0.0.2", 00:18:05.421 "trsvcid": "4420" 00:18:05.421 }, 
00:18:05.421 "peer_address": { 00:18:05.421 "trtype": "TCP", 00:18:05.421 "adrfam": "IPv4", 00:18:05.421 "traddr": "10.0.0.1", 00:18:05.421 "trsvcid": "39522" 00:18:05.421 }, 00:18:05.421 "auth": { 00:18:05.421 "state": "completed", 00:18:05.421 "digest": "sha256", 00:18:05.421 "dhgroup": "ffdhe4096" 00:18:05.421 } 00:18:05.421 } 00:18:05.421 ]' 00:18:05.421 21:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.421 21:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.421 21:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.421 21:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.421 21:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.421 21:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.421 21:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.421 21:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.682 21:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:18:06.252 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.252 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.252 21:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.252 21:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.252 21:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.252 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.252 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.252 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:06.252 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:06.513 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:06.513 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.513 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.513 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:06.513 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:06.513 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:06.513 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.513 21:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.513 21:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.513 21:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.513 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.513 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.774 00:18:06.774 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.774 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.774 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.035 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.035 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.035 21:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.035 21:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.035 21:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.035 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.035 { 00:18:07.035 "cntlid": 33, 00:18:07.035 "qid": 0, 00:18:07.035 "state": "enabled", 00:18:07.035 "thread": "nvmf_tgt_poll_group_000", 00:18:07.035 "listen_address": { 00:18:07.035 "trtype": "TCP", 00:18:07.035 "adrfam": "IPv4", 00:18:07.035 "traddr": "10.0.0.2", 00:18:07.035 "trsvcid": "4420" 00:18:07.035 }, 00:18:07.035 "peer_address": { 00:18:07.035 "trtype": "TCP", 00:18:07.035 "adrfam": "IPv4", 00:18:07.035 "traddr": "10.0.0.1", 00:18:07.035 "trsvcid": "39542" 00:18:07.035 }, 00:18:07.035 "auth": { 00:18:07.035 "state": "completed", 00:18:07.035 "digest": "sha256", 00:18:07.035 "dhgroup": "ffdhe6144" 00:18:07.035 } 00:18:07.035 } 00:18:07.035 ]' 00:18:07.035 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.035 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.035 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.296 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.296 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.296 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.296 21:33:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.296 21:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.296 21:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.240 21:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.240 21:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.240 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.240 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.810 00:18:08.810 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.810 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.810 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.810 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.810 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.810 21:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.810 21:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.810 21:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.810 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.810 { 00:18:08.810 "cntlid": 35, 00:18:08.810 "qid": 0, 00:18:08.810 "state": "enabled", 00:18:08.810 "thread": "nvmf_tgt_poll_group_000", 00:18:08.810 "listen_address": { 00:18:08.810 "trtype": "TCP", 00:18:08.810 "adrfam": "IPv4", 00:18:08.810 "traddr": "10.0.0.2", 00:18:08.810 "trsvcid": "4420" 00:18:08.810 }, 00:18:08.810 "peer_address": { 00:18:08.810 "trtype": "TCP", 00:18:08.810 "adrfam": "IPv4", 00:18:08.810 "traddr": "10.0.0.1", 00:18:08.810 "trsvcid": "39302" 00:18:08.810 }, 00:18:08.810 "auth": { 00:18:08.810 "state": "completed", 00:18:08.810 "digest": "sha256", 00:18:08.810 "dhgroup": "ffdhe6144" 00:18:08.810 } 00:18:08.810 } 00:18:08.810 ]' 00:18:08.810 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.810 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.810 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.071 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:09.071 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.071 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.071 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.071 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.071 21:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:18:10.013 21:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.013 21:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.013 21:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.013 21:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.013 21:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.013 21:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.013 21:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:10.014 21:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:10.014 21:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:10.014 21:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.014 21:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:10.014 21:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:10.014 21:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:10.014 21:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.014 21:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.014 21:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.014 21:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.014 21:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.014 21:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.014 21:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.585 00:18:10.585 21:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.585 21:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.585 21:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.585 21:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.585 21:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.585 21:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.585 21:34:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:10.585 21:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.585 21:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.585 { 00:18:10.585 "cntlid": 37, 00:18:10.585 "qid": 0, 00:18:10.585 "state": "enabled", 00:18:10.585 "thread": "nvmf_tgt_poll_group_000", 00:18:10.585 "listen_address": { 00:18:10.585 "trtype": "TCP", 00:18:10.585 "adrfam": "IPv4", 00:18:10.585 "traddr": "10.0.0.2", 00:18:10.585 "trsvcid": "4420" 00:18:10.585 }, 00:18:10.585 "peer_address": { 00:18:10.585 "trtype": "TCP", 00:18:10.585 "adrfam": "IPv4", 00:18:10.585 "traddr": "10.0.0.1", 00:18:10.585 "trsvcid": "39332" 00:18:10.585 }, 00:18:10.585 "auth": { 00:18:10.585 "state": "completed", 00:18:10.585 "digest": "sha256", 00:18:10.585 "dhgroup": "ffdhe6144" 00:18:10.585 } 00:18:10.585 } 00:18:10.585 ]' 00:18:10.585 21:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.585 21:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.585 21:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.846 21:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.846 21:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.846 21:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.846 21:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.846 21:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.846 21:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:18:11.789 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.789 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.789 21:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.789 21:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.790 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.360 00:18:12.360 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.360 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.360 21:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.360 21:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.360 21:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.360 21:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.360 21:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.360 21:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.360 21:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.360 { 00:18:12.360 "cntlid": 39, 00:18:12.360 "qid": 0, 00:18:12.360 "state": "enabled", 00:18:12.360 "thread": "nvmf_tgt_poll_group_000", 00:18:12.360 "listen_address": { 00:18:12.360 "trtype": "TCP", 00:18:12.360 "adrfam": "IPv4", 00:18:12.360 "traddr": "10.0.0.2", 00:18:12.360 "trsvcid": "4420" 00:18:12.360 }, 00:18:12.360 "peer_address": { 00:18:12.360 "trtype": "TCP", 00:18:12.360 "adrfam": "IPv4", 00:18:12.360 "traddr": "10.0.0.1", 00:18:12.360 "trsvcid": "39362" 00:18:12.360 }, 00:18:12.360 "auth": { 00:18:12.360 "state": "completed", 00:18:12.360 "digest": "sha256", 00:18:12.360 "dhgroup": "ffdhe6144" 00:18:12.360 } 00:18:12.360 } 00:18:12.360 ]' 00:18:12.360 21:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.360 21:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.360 21:34:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.621 21:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.621 21:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.621 21:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.621 21:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.621 21:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.621 21:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.565 21:34:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.565 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.136 00:18:14.136 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.136 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.136 21:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.396 21:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.396 21:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.396 21:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.396 21:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.396 21:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.396 21:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.396 { 00:18:14.396 "cntlid": 41, 00:18:14.396 "qid": 0, 00:18:14.396 "state": "enabled", 00:18:14.396 "thread": "nvmf_tgt_poll_group_000", 00:18:14.396 "listen_address": { 00:18:14.396 "trtype": "TCP", 00:18:14.396 "adrfam": "IPv4", 00:18:14.396 "traddr": "10.0.0.2", 00:18:14.396 "trsvcid": "4420" 00:18:14.396 }, 00:18:14.396 "peer_address": { 00:18:14.396 "trtype": "TCP", 00:18:14.396 "adrfam": "IPv4", 00:18:14.396 "traddr": "10.0.0.1", 00:18:14.396 "trsvcid": "39386" 00:18:14.396 }, 00:18:14.396 "auth": { 00:18:14.396 "state": "completed", 00:18:14.396 "digest": "sha256", 00:18:14.396 "dhgroup": "ffdhe8192" 00:18:14.396 } 00:18:14.396 } 00:18:14.396 ]' 00:18:14.396 21:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.396 21:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.396 21:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.396 21:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.396 21:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.396 21:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.396 21:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.396 21:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.657 21:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
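Condensed, the iteration traced above boils down to the RPC sequence below. This is a sketch rather than the script itself; it assumes $rootdir points at the SPDK checkout and that the named keys key0/ckey0 were registered with the keyring earlier in auth.sh, outside this excerpt.

    rpc="$rootdir/scripts/rpc.py"                    # target-side RPC, default socket
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; } # host-side RPC, same helper name as in auth.sh
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # restrict the host to the digest/dhgroup under test, then authenticate with key0/ckey0
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn"       # auth block inspected with jq, see below
    hostrpc bdev_nvme_detach_controller nvme0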
DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:18:15.598 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.599 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.169 00:18:16.169 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.169 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.169 21:34:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.169 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.169 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.169 21:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.169 21:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.169 21:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.169 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.169 { 00:18:16.169 "cntlid": 43, 00:18:16.169 "qid": 0, 00:18:16.169 "state": "enabled", 00:18:16.169 "thread": "nvmf_tgt_poll_group_000", 00:18:16.169 "listen_address": { 00:18:16.169 "trtype": "TCP", 00:18:16.169 "adrfam": "IPv4", 00:18:16.169 "traddr": "10.0.0.2", 00:18:16.169 "trsvcid": "4420" 00:18:16.169 }, 00:18:16.169 "peer_address": { 00:18:16.169 "trtype": "TCP", 00:18:16.169 "adrfam": "IPv4", 00:18:16.169 "traddr": "10.0.0.1", 00:18:16.169 "trsvcid": "39412" 00:18:16.169 }, 00:18:16.169 "auth": { 00:18:16.169 "state": "completed", 00:18:16.169 "digest": "sha256", 00:18:16.169 "dhgroup": "ffdhe8192" 00:18:16.169 } 00:18:16.169 } 00:18:16.169 ]' 00:18:16.169 21:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.429 21:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.429 21:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.429 21:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.429 21:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.429 21:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.429 21:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.429 21:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.688 21:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:18:17.306 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.306 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.306 21:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.306 21:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.306 21:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.306 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.306 21:34:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:17.306 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:17.565 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:17.565 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.565 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.565 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:17.565 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:17.565 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.565 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.565 21:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.565 21:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.565 21:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.565 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.565 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.134 00:18:18.134 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.134 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.134 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.134 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.134 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.134 21:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.134 21:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.134 21:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.134 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.134 { 00:18:18.134 "cntlid": 45, 00:18:18.134 "qid": 0, 00:18:18.134 "state": "enabled", 00:18:18.134 "thread": "nvmf_tgt_poll_group_000", 00:18:18.134 "listen_address": { 00:18:18.134 "trtype": "TCP", 00:18:18.134 "adrfam": "IPv4", 00:18:18.134 "traddr": "10.0.0.2", 00:18:18.134 "trsvcid": "4420" 00:18:18.134 }, 00:18:18.134 
"peer_address": { 00:18:18.134 "trtype": "TCP", 00:18:18.134 "adrfam": "IPv4", 00:18:18.134 "traddr": "10.0.0.1", 00:18:18.134 "trsvcid": "39450" 00:18:18.134 }, 00:18:18.134 "auth": { 00:18:18.134 "state": "completed", 00:18:18.134 "digest": "sha256", 00:18:18.134 "dhgroup": "ffdhe8192" 00:18:18.134 } 00:18:18.134 } 00:18:18.134 ]' 00:18:18.134 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.392 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.392 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.392 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.392 21:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.392 21:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.392 21:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.392 21:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.651 21:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:18:19.220 21:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.220 21:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.220 21:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.220 21:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.220 21:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.220 21:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.220 21:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:19.220 21:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:19.481 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:19.481 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.481 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.481 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:19.481 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.481 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.481 21:34:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:19.481 21:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.481 21:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.481 21:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.481 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.481 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.053 00:18:20.053 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.053 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.053 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.053 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.053 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.053 21:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.053 21:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.053 21:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.053 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.053 { 00:18:20.053 "cntlid": 47, 00:18:20.053 "qid": 0, 00:18:20.053 "state": "enabled", 00:18:20.053 "thread": "nvmf_tgt_poll_group_000", 00:18:20.053 "listen_address": { 00:18:20.053 "trtype": "TCP", 00:18:20.053 "adrfam": "IPv4", 00:18:20.053 "traddr": "10.0.0.2", 00:18:20.053 "trsvcid": "4420" 00:18:20.053 }, 00:18:20.053 "peer_address": { 00:18:20.053 "trtype": "TCP", 00:18:20.053 "adrfam": "IPv4", 00:18:20.053 "traddr": "10.0.0.1", 00:18:20.053 "trsvcid": "51890" 00:18:20.053 }, 00:18:20.053 "auth": { 00:18:20.053 "state": "completed", 00:18:20.053 "digest": "sha256", 00:18:20.053 "dhgroup": "ffdhe8192" 00:18:20.053 } 00:18:20.053 } 00:18:20.053 ]' 00:18:20.053 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.313 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.313 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.313 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.313 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.313 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.313 21:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.313 21:34:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.572 21:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:18:21.141 21:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.141 21:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.141 21:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.141 21:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.141 21:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.141 21:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:21.141 21:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.141 21:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.141 21:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:21.141 21:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:21.401 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:21.401 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.401 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:21.401 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:21.401 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.401 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.401 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.401 21:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.401 21:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.401 21:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.401 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.401 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
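Here the outer loops move on from sha256 to sha384 and reset the dhgroup to null. Reconstructed from the for-loop lines at target/auth.sh@91-94, the sweep has roughly the shape below; the exact array contents are an assumption, this excerpt only shows sha256/sha384 together with null, ffdhe2048, ffdhe6144 and ffdhe8192, and keys 0-3.

    for digest in "${digests[@]}"; do        # e.g. sha256 sha384 ...
        for dhgroup in "${dhgroups[@]}"; do  # e.g. null ffdhe2048 ... ffdhe8192
            for keyid in "${!keys[@]}"; do   # 0..3
                # limit the host to one digest/dhgroup, then run a full attach/verify/detach pass
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done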
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.661 00:18:21.661 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.661 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.661 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.661 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.661 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.661 21:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.661 21:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.661 21:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.661 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.661 { 00:18:21.661 "cntlid": 49, 00:18:21.661 "qid": 0, 00:18:21.661 "state": "enabled", 00:18:21.661 "thread": "nvmf_tgt_poll_group_000", 00:18:21.661 "listen_address": { 00:18:21.661 "trtype": "TCP", 00:18:21.661 "adrfam": "IPv4", 00:18:21.661 "traddr": "10.0.0.2", 00:18:21.661 "trsvcid": "4420" 00:18:21.661 }, 00:18:21.661 "peer_address": { 00:18:21.661 "trtype": "TCP", 00:18:21.661 "adrfam": "IPv4", 00:18:21.661 "traddr": "10.0.0.1", 00:18:21.661 "trsvcid": "51910" 00:18:21.662 }, 00:18:21.662 "auth": { 00:18:21.662 "state": "completed", 00:18:21.662 "digest": "sha384", 00:18:21.662 "dhgroup": "null" 00:18:21.662 } 00:18:21.662 } 00:18:21.662 ]' 00:18:21.922 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.922 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.922 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.922 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:21.922 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.922 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.922 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.922 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.183 21:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:18:22.756 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.756 21:34:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.756 21:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.756 21:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.756 21:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.756 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.756 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:22.756 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:23.017 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:23.017 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.017 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.017 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:23.017 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:23.017 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.017 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.017 21:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.017 21:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.017 21:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.017 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.017 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.278 00:18:23.278 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.278 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.278 21:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.278 21:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.278 21:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.278 21:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.278 21:34:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:23.278 21:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.278 21:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.278 { 00:18:23.278 "cntlid": 51, 00:18:23.278 "qid": 0, 00:18:23.278 "state": "enabled", 00:18:23.278 "thread": "nvmf_tgt_poll_group_000", 00:18:23.278 "listen_address": { 00:18:23.278 "trtype": "TCP", 00:18:23.278 "adrfam": "IPv4", 00:18:23.278 "traddr": "10.0.0.2", 00:18:23.278 "trsvcid": "4420" 00:18:23.278 }, 00:18:23.278 "peer_address": { 00:18:23.278 "trtype": "TCP", 00:18:23.278 "adrfam": "IPv4", 00:18:23.278 "traddr": "10.0.0.1", 00:18:23.278 "trsvcid": "51948" 00:18:23.278 }, 00:18:23.278 "auth": { 00:18:23.278 "state": "completed", 00:18:23.278 "digest": "sha384", 00:18:23.278 "dhgroup": "null" 00:18:23.278 } 00:18:23.278 } 00:18:23.278 ]' 00:18:23.278 21:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.540 21:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.540 21:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.540 21:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:23.540 21:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.540 21:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.540 21:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.540 21:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.800 21:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:18:24.370 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.371 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.371 21:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.371 21:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.371 21:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.371 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.371 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:24.371 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:24.631 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:24.631 21:34:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.631 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.631 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:24.631 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:24.631 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.631 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.631 21:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.631 21:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.631 21:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.631 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.631 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.892 00:18:24.892 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.892 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.892 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.153 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.153 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.153 21:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.153 21:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.153 21:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.153 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.153 { 00:18:25.153 "cntlid": 53, 00:18:25.153 "qid": 0, 00:18:25.153 "state": "enabled", 00:18:25.153 "thread": "nvmf_tgt_poll_group_000", 00:18:25.153 "listen_address": { 00:18:25.153 "trtype": "TCP", 00:18:25.153 "adrfam": "IPv4", 00:18:25.153 "traddr": "10.0.0.2", 00:18:25.153 "trsvcid": "4420" 00:18:25.153 }, 00:18:25.153 "peer_address": { 00:18:25.153 "trtype": "TCP", 00:18:25.153 "adrfam": "IPv4", 00:18:25.153 "traddr": "10.0.0.1", 00:18:25.153 "trsvcid": "51974" 00:18:25.153 }, 00:18:25.153 "auth": { 00:18:25.153 "state": "completed", 00:18:25.153 "digest": "sha384", 00:18:25.153 "dhgroup": "null" 00:18:25.153 } 00:18:25.153 } 00:18:25.153 ]' 00:18:25.153 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.153 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:18:25.153 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.153 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:25.153 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.153 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.153 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.153 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.414 21:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:18:25.986 21:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.986 21:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.986 21:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.986 21:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.986 21:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.986 21:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.986 21:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:25.986 21:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:26.247 21:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:26.247 21:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.247 21:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:26.247 21:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:26.247 21:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:26.247 21:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.247 21:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:26.247 21:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.247 21:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.247 21:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.247 21:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
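Each iteration ends with the kernel initiator repeating the handshake through nvme-cli, as in the connect line above. A sketch with placeholder secrets; the trace passes the full DHHC-1 base64 blobs matching the key/ckey pair under test.

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
         -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
         --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
         --dhchap-secret 'DHHC-1:02:<host secret>:' \
         --dhchap-ctrl-secret 'DHHC-1:01:<controller secret>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # the target then removes the host again before the next key is tried
    # (nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>)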
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.247 21:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.508 00:18:26.508 21:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.508 21:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.508 21:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.508 21:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.508 21:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.508 21:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.508 21:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.508 21:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.508 21:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.508 { 00:18:26.508 "cntlid": 55, 00:18:26.508 "qid": 0, 00:18:26.508 "state": "enabled", 00:18:26.508 "thread": "nvmf_tgt_poll_group_000", 00:18:26.508 "listen_address": { 00:18:26.508 "trtype": "TCP", 00:18:26.508 "adrfam": "IPv4", 00:18:26.508 "traddr": "10.0.0.2", 00:18:26.508 "trsvcid": "4420" 00:18:26.508 }, 00:18:26.508 "peer_address": { 00:18:26.508 "trtype": "TCP", 00:18:26.508 "adrfam": "IPv4", 00:18:26.508 "traddr": "10.0.0.1", 00:18:26.508 "trsvcid": "52004" 00:18:26.508 }, 00:18:26.508 "auth": { 00:18:26.508 "state": "completed", 00:18:26.508 "digest": "sha384", 00:18:26.508 "dhgroup": "null" 00:18:26.508 } 00:18:26.508 } 00:18:26.509 ]' 00:18:26.509 21:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.770 21:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.770 21:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.770 21:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:26.770 21:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.770 21:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.770 21:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.770 21:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.040 21:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:18:27.617 21:34:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.617 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.617 21:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.617 21:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.617 21:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.617 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.617 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.617 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:27.617 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:27.877 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:27.877 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.877 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.877 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:27.877 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:27.877 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.877 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.877 21:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.877 21:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.877 21:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.877 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.877 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.877 00:18:27.877 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.877 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.877 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.138 21:34:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.138 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.138 21:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.138 21:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.138 21:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.138 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.138 { 00:18:28.138 "cntlid": 57, 00:18:28.138 "qid": 0, 00:18:28.138 "state": "enabled", 00:18:28.138 "thread": "nvmf_tgt_poll_group_000", 00:18:28.138 "listen_address": { 00:18:28.138 "trtype": "TCP", 00:18:28.138 "adrfam": "IPv4", 00:18:28.138 "traddr": "10.0.0.2", 00:18:28.138 "trsvcid": "4420" 00:18:28.138 }, 00:18:28.138 "peer_address": { 00:18:28.138 "trtype": "TCP", 00:18:28.138 "adrfam": "IPv4", 00:18:28.138 "traddr": "10.0.0.1", 00:18:28.138 "trsvcid": "50114" 00:18:28.138 }, 00:18:28.138 "auth": { 00:18:28.138 "state": "completed", 00:18:28.138 "digest": "sha384", 00:18:28.138 "dhgroup": "ffdhe2048" 00:18:28.138 } 00:18:28.138 } 00:18:28.138 ]' 00:18:28.138 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.138 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.138 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.138 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:28.138 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.399 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.399 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.399 21:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.399 21:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:18:29.344 21:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.344 21:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.344 21:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.344 21:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.344 21:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.344 21:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.344 21:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:29.344 21:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:29.344 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:29.344 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.344 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.344 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:29.344 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:29.344 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.344 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.344 21:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.344 21:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.344 21:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.344 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.344 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.605 00:18:29.605 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.605 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.605 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.866 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.866 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.866 21:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.866 21:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.866 21:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.866 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.866 { 00:18:29.866 "cntlid": 59, 00:18:29.866 "qid": 0, 00:18:29.866 "state": "enabled", 00:18:29.866 "thread": "nvmf_tgt_poll_group_000", 00:18:29.866 "listen_address": { 00:18:29.866 "trtype": "TCP", 00:18:29.866 "adrfam": "IPv4", 00:18:29.866 "traddr": "10.0.0.2", 00:18:29.866 "trsvcid": "4420" 00:18:29.866 }, 00:18:29.866 "peer_address": { 00:18:29.866 "trtype": "TCP", 00:18:29.866 "adrfam": "IPv4", 00:18:29.866 
"traddr": "10.0.0.1", 00:18:29.866 "trsvcid": "50128" 00:18:29.866 }, 00:18:29.866 "auth": { 00:18:29.866 "state": "completed", 00:18:29.866 "digest": "sha384", 00:18:29.866 "dhgroup": "ffdhe2048" 00:18:29.866 } 00:18:29.866 } 00:18:29.866 ]' 00:18:29.866 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.866 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.866 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.866 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:29.866 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.866 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.866 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.866 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.127 21:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.071 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.332 00:18:31.332 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.332 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.332 21:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.631 21:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.631 21:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.631 21:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.631 21:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.631 21:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.631 21:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.631 { 00:18:31.631 "cntlid": 61, 00:18:31.631 "qid": 0, 00:18:31.631 "state": "enabled", 00:18:31.631 "thread": "nvmf_tgt_poll_group_000", 00:18:31.631 "listen_address": { 00:18:31.631 "trtype": "TCP", 00:18:31.631 "adrfam": "IPv4", 00:18:31.631 "traddr": "10.0.0.2", 00:18:31.631 "trsvcid": "4420" 00:18:31.631 }, 00:18:31.631 "peer_address": { 00:18:31.631 "trtype": "TCP", 00:18:31.631 "adrfam": "IPv4", 00:18:31.631 "traddr": "10.0.0.1", 00:18:31.631 "trsvcid": "50150" 00:18:31.631 }, 00:18:31.631 "auth": { 00:18:31.631 "state": "completed", 00:18:31.631 "digest": "sha384", 00:18:31.631 "dhgroup": "ffdhe2048" 00:18:31.631 } 00:18:31.631 } 00:18:31.631 ]' 00:18:31.631 21:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.631 21:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.631 21:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.631 21:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:31.631 21:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.631 21:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.631 21:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.631 21:34:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.894 21:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:18:32.466 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.466 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.466 21:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.466 21:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.466 21:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.466 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.466 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:32.466 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:32.727 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:32.727 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.727 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:32.727 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:32.727 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:32.727 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.727 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:32.727 21:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.727 21:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.727 21:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.727 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.727 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.988 00:18:32.988 21:34:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.988 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.988 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.248 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.248 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.248 21:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.248 21:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.248 21:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.249 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.249 { 00:18:33.249 "cntlid": 63, 00:18:33.249 "qid": 0, 00:18:33.249 "state": "enabled", 00:18:33.249 "thread": "nvmf_tgt_poll_group_000", 00:18:33.249 "listen_address": { 00:18:33.249 "trtype": "TCP", 00:18:33.249 "adrfam": "IPv4", 00:18:33.249 "traddr": "10.0.0.2", 00:18:33.249 "trsvcid": "4420" 00:18:33.249 }, 00:18:33.249 "peer_address": { 00:18:33.249 "trtype": "TCP", 00:18:33.249 "adrfam": "IPv4", 00:18:33.249 "traddr": "10.0.0.1", 00:18:33.249 "trsvcid": "50196" 00:18:33.249 }, 00:18:33.249 "auth": { 00:18:33.249 "state": "completed", 00:18:33.249 "digest": "sha384", 00:18:33.249 "dhgroup": "ffdhe2048" 00:18:33.249 } 00:18:33.249 } 00:18:33.249 ]' 00:18:33.249 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.249 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.249 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.249 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.249 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.249 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.249 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.249 21:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.509 21:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:18:34.080 21:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.080 21:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.080 21:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.080 21:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
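Each connect_authenticate pass in the trace above (and in the ffdhe3072/ffdhe4096/ffdhe6144 passes that follow) repeats the same host/target round trip. The sketch below is an illustrative reconstruction of one such cycle, assembled from the rpc.py subcommands and nvme-cli flags that appear verbatim in this log; the socket path, addresses, subsystem/host NQNs and key names (key0..key3, ckey0..ckey3) are copied from the trace, while the helper variable names, the placeholder secrets, and the assumption that the DH-HMAC-CHAP keys were registered earlier in the test are editorial simplifications, not the literal contents of target/auth.sh.

  #!/usr/bin/env bash
  # Illustrative sketch of one connect_authenticate cycle; digest, dhgroup and keyid
  # vary per pass (sha384 x ffdhe2048/3072/4096/6144 x key0..key3 in this excerpt).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  digest=sha384 dhgroup=ffdhe2048 keyid=1
  # Placeholder secrets for the kernel-initiator step; the log shows the real
  # DHHC-1:xx:... strings, elided here.
  plaintext_key='DHHC-1:01:...'
  plaintext_ckey='DHHC-1:02:...'

  # 1. Restrict the host-side bdev/nvme layer to the digest and DH group under test.
  $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # 2. Allow the host on the target subsystem with the matching key pair
  #    (key$keyid / ckey$keyid are assumed to have been loaded before this excerpt).
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # 3. Attach a controller from the host side; this is where authentication runs.
  $RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # 4. Verify the controller came up and the target-side qpair reports the
  #    negotiated digest, DH group, and a completed auth state.
  [[ "$($RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "$digest"  ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "$dhgroup" ]]
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed  ]]

  # 5. Tear down, then repeat the check once through the kernel initiator.
  $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-secret "$plaintext_key" --dhchap-ctrl-secret "$plaintext_ckey"
  nvme disconnect -n "$SUBNQN"
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The same skeleton explains the repetition in the rest of this section: only the --dhchap-dhgroups value and the key index change between passes, which is why the qpairs JSON checks below differ only in "dhgroup" and "cntlid".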
00:18:34.080 21:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.080 21:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.080 21:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.080 21:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:34.080 21:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:34.341 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:34.341 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.341 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.341 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:34.341 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:34.341 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.341 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.341 21:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.341 21:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.341 21:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.341 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.341 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.602 00:18:34.602 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.602 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.602 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.862 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.862 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.862 21:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.862 21:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.862 21:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.862 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.862 { 
00:18:34.862 "cntlid": 65, 00:18:34.862 "qid": 0, 00:18:34.862 "state": "enabled", 00:18:34.862 "thread": "nvmf_tgt_poll_group_000", 00:18:34.862 "listen_address": { 00:18:34.862 "trtype": "TCP", 00:18:34.862 "adrfam": "IPv4", 00:18:34.862 "traddr": "10.0.0.2", 00:18:34.862 "trsvcid": "4420" 00:18:34.862 }, 00:18:34.862 "peer_address": { 00:18:34.862 "trtype": "TCP", 00:18:34.862 "adrfam": "IPv4", 00:18:34.862 "traddr": "10.0.0.1", 00:18:34.862 "trsvcid": "50206" 00:18:34.862 }, 00:18:34.862 "auth": { 00:18:34.862 "state": "completed", 00:18:34.862 "digest": "sha384", 00:18:34.862 "dhgroup": "ffdhe3072" 00:18:34.862 } 00:18:34.862 } 00:18:34.862 ]' 00:18:34.862 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.862 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.862 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.862 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:34.862 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.862 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.862 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.862 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.122 21:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:18:35.695 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.957 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.217 00:18:36.217 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.217 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.217 21:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.477 21:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.477 21:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.477 21:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.477 21:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.477 21:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.477 21:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.477 { 00:18:36.477 "cntlid": 67, 00:18:36.477 "qid": 0, 00:18:36.477 "state": "enabled", 00:18:36.477 "thread": "nvmf_tgt_poll_group_000", 00:18:36.477 "listen_address": { 00:18:36.477 "trtype": "TCP", 00:18:36.477 "adrfam": "IPv4", 00:18:36.477 "traddr": "10.0.0.2", 00:18:36.477 "trsvcid": "4420" 00:18:36.477 }, 00:18:36.477 "peer_address": { 00:18:36.477 "trtype": "TCP", 00:18:36.477 "adrfam": "IPv4", 00:18:36.477 "traddr": "10.0.0.1", 00:18:36.477 "trsvcid": "50232" 00:18:36.477 }, 00:18:36.477 "auth": { 00:18:36.477 "state": "completed", 00:18:36.477 "digest": "sha384", 00:18:36.477 "dhgroup": "ffdhe3072" 00:18:36.477 } 00:18:36.477 } 00:18:36.477 ]' 00:18:36.477 21:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.477 21:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.477 21:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.477 21:34:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.477 21:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.477 21:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.477 21:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.477 21:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.737 21:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.680 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.941 00:18:37.941 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.941 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.941 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.202 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.202 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.202 21:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.202 21:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.202 21:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.202 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.202 { 00:18:38.202 "cntlid": 69, 00:18:38.202 "qid": 0, 00:18:38.202 "state": "enabled", 00:18:38.202 "thread": "nvmf_tgt_poll_group_000", 00:18:38.202 "listen_address": { 00:18:38.202 "trtype": "TCP", 00:18:38.202 "adrfam": "IPv4", 00:18:38.202 "traddr": "10.0.0.2", 00:18:38.202 "trsvcid": "4420" 00:18:38.202 }, 00:18:38.202 "peer_address": { 00:18:38.202 "trtype": "TCP", 00:18:38.202 "adrfam": "IPv4", 00:18:38.202 "traddr": "10.0.0.1", 00:18:38.202 "trsvcid": "49810" 00:18:38.202 }, 00:18:38.202 "auth": { 00:18:38.202 "state": "completed", 00:18:38.202 "digest": "sha384", 00:18:38.202 "dhgroup": "ffdhe3072" 00:18:38.202 } 00:18:38.202 } 00:18:38.202 ]' 00:18:38.202 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.202 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.202 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.202 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:38.202 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.202 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.202 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.202 21:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.462 21:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret 
DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:18:39.032 21:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.293 21:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.293 21:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.293 21:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.293 21:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.293 21:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.293 21:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:39.293 21:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:39.293 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:39.293 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.293 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:39.293 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:39.293 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:39.293 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.293 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:39.293 21:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.293 21:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.293 21:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.293 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.293 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.554 00:18:39.554 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.554 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.554 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.815 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.815 21:34:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.815 21:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.815 21:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.815 21:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.815 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.815 { 00:18:39.815 "cntlid": 71, 00:18:39.815 "qid": 0, 00:18:39.815 "state": "enabled", 00:18:39.815 "thread": "nvmf_tgt_poll_group_000", 00:18:39.815 "listen_address": { 00:18:39.815 "trtype": "TCP", 00:18:39.815 "adrfam": "IPv4", 00:18:39.815 "traddr": "10.0.0.2", 00:18:39.815 "trsvcid": "4420" 00:18:39.815 }, 00:18:39.815 "peer_address": { 00:18:39.815 "trtype": "TCP", 00:18:39.815 "adrfam": "IPv4", 00:18:39.815 "traddr": "10.0.0.1", 00:18:39.815 "trsvcid": "49844" 00:18:39.815 }, 00:18:39.815 "auth": { 00:18:39.815 "state": "completed", 00:18:39.815 "digest": "sha384", 00:18:39.815 "dhgroup": "ffdhe3072" 00:18:39.815 } 00:18:39.815 } 00:18:39.815 ]' 00:18:39.815 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.815 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.815 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.815 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:39.815 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.815 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.815 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.815 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.077 21:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.037 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.296 00:18:41.296 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.296 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.296 21:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.556 21:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.556 21:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.556 21:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.556 21:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.556 21:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.556 21:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.556 { 00:18:41.556 "cntlid": 73, 00:18:41.556 "qid": 0, 00:18:41.556 "state": "enabled", 00:18:41.556 "thread": "nvmf_tgt_poll_group_000", 00:18:41.556 "listen_address": { 00:18:41.556 "trtype": "TCP", 00:18:41.556 "adrfam": "IPv4", 00:18:41.556 "traddr": "10.0.0.2", 00:18:41.556 "trsvcid": "4420" 00:18:41.556 }, 00:18:41.556 "peer_address": { 00:18:41.556 "trtype": "TCP", 00:18:41.556 "adrfam": "IPv4", 00:18:41.556 "traddr": "10.0.0.1", 00:18:41.556 "trsvcid": "49878" 00:18:41.556 }, 00:18:41.556 "auth": { 00:18:41.556 
"state": "completed", 00:18:41.556 "digest": "sha384", 00:18:41.556 "dhgroup": "ffdhe4096" 00:18:41.556 } 00:18:41.556 } 00:18:41.556 ]' 00:18:41.556 21:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.556 21:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.556 21:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.556 21:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:41.556 21:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.556 21:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.556 21:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.556 21:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.816 21:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:18:42.384 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.644 21:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.645 21:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.645 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.645 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.904 00:18:42.904 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.904 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.904 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.164 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.164 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.164 21:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.164 21:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.164 21:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.164 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.164 { 00:18:43.164 "cntlid": 75, 00:18:43.164 "qid": 0, 00:18:43.164 "state": "enabled", 00:18:43.164 "thread": "nvmf_tgt_poll_group_000", 00:18:43.164 "listen_address": { 00:18:43.164 "trtype": "TCP", 00:18:43.164 "adrfam": "IPv4", 00:18:43.164 "traddr": "10.0.0.2", 00:18:43.164 "trsvcid": "4420" 00:18:43.164 }, 00:18:43.164 "peer_address": { 00:18:43.164 "trtype": "TCP", 00:18:43.164 "adrfam": "IPv4", 00:18:43.164 "traddr": "10.0.0.1", 00:18:43.164 "trsvcid": "49898" 00:18:43.164 }, 00:18:43.164 "auth": { 00:18:43.164 "state": "completed", 00:18:43.164 "digest": "sha384", 00:18:43.164 "dhgroup": "ffdhe4096" 00:18:43.164 } 00:18:43.164 } 00:18:43.164 ]' 00:18:43.164 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.164 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.164 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.164 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:43.164 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.424 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.424 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.424 21:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.424 21:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:18:44.365 21:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.365 21:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.365 21:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.365 21:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.365 21:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.365 21:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.365 21:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:44.365 21:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:44.365 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:44.365 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.365 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:44.365 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:44.365 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:44.365 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.365 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.365 21:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.365 21:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.365 21:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.365 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.365 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:44.626 00:18:44.626 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.626 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.626 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.886 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.886 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.886 21:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.886 21:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.886 21:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.886 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.886 { 00:18:44.886 "cntlid": 77, 00:18:44.886 "qid": 0, 00:18:44.886 "state": "enabled", 00:18:44.886 "thread": "nvmf_tgt_poll_group_000", 00:18:44.886 "listen_address": { 00:18:44.886 "trtype": "TCP", 00:18:44.886 "adrfam": "IPv4", 00:18:44.886 "traddr": "10.0.0.2", 00:18:44.886 "trsvcid": "4420" 00:18:44.886 }, 00:18:44.886 "peer_address": { 00:18:44.886 "trtype": "TCP", 00:18:44.886 "adrfam": "IPv4", 00:18:44.886 "traddr": "10.0.0.1", 00:18:44.886 "trsvcid": "49920" 00:18:44.886 }, 00:18:44.886 "auth": { 00:18:44.886 "state": "completed", 00:18:44.886 "digest": "sha384", 00:18:44.886 "dhgroup": "ffdhe4096" 00:18:44.886 } 00:18:44.886 } 00:18:44.886 ]' 00:18:44.886 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.886 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.887 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.887 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:44.887 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.887 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.887 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.887 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.147 21:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:46.089 21:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:46.365 00:18:46.365 21:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.365 21:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.365 21:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.673 21:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.673 21:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.673 21:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.673 21:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.673 21:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.673 21:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.673 { 00:18:46.673 "cntlid": 79, 00:18:46.673 "qid": 
0, 00:18:46.673 "state": "enabled", 00:18:46.673 "thread": "nvmf_tgt_poll_group_000", 00:18:46.673 "listen_address": { 00:18:46.673 "trtype": "TCP", 00:18:46.673 "adrfam": "IPv4", 00:18:46.673 "traddr": "10.0.0.2", 00:18:46.673 "trsvcid": "4420" 00:18:46.673 }, 00:18:46.673 "peer_address": { 00:18:46.673 "trtype": "TCP", 00:18:46.673 "adrfam": "IPv4", 00:18:46.673 "traddr": "10.0.0.1", 00:18:46.673 "trsvcid": "49948" 00:18:46.673 }, 00:18:46.673 "auth": { 00:18:46.673 "state": "completed", 00:18:46.673 "digest": "sha384", 00:18:46.673 "dhgroup": "ffdhe4096" 00:18:46.673 } 00:18:46.673 } 00:18:46.673 ]' 00:18:46.673 21:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.673 21:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.673 21:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.673 21:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.673 21:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.673 21:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.673 21:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.673 21:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.934 21:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:18:47.503 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.503 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.503 21:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.503 21:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.503 21:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.503 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.503 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.503 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:47.503 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:47.763 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:47.763 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.763 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:47.763 21:34:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:47.763 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:47.763 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.763 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.763 21:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.763 21:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.763 21:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.763 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.763 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.023 00:18:48.023 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.023 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.023 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.284 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.284 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.284 21:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.284 21:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.284 21:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.284 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.284 { 00:18:48.284 "cntlid": 81, 00:18:48.284 "qid": 0, 00:18:48.284 "state": "enabled", 00:18:48.284 "thread": "nvmf_tgt_poll_group_000", 00:18:48.284 "listen_address": { 00:18:48.284 "trtype": "TCP", 00:18:48.284 "adrfam": "IPv4", 00:18:48.284 "traddr": "10.0.0.2", 00:18:48.284 "trsvcid": "4420" 00:18:48.284 }, 00:18:48.284 "peer_address": { 00:18:48.284 "trtype": "TCP", 00:18:48.284 "adrfam": "IPv4", 00:18:48.284 "traddr": "10.0.0.1", 00:18:48.284 "trsvcid": "35346" 00:18:48.284 }, 00:18:48.284 "auth": { 00:18:48.284 "state": "completed", 00:18:48.284 "digest": "sha384", 00:18:48.284 "dhgroup": "ffdhe6144" 00:18:48.284 } 00:18:48.284 } 00:18:48.284 ]' 00:18:48.284 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.284 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.284 21:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.284 21:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:48.284 21:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.284 21:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.284 21:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.284 21:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.545 21:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:18:49.486 21:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.486 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.747 00:18:49.748 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.748 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.748 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.007 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.007 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.007 21:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.007 21:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.007 21:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.007 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.007 { 00:18:50.007 "cntlid": 83, 00:18:50.007 "qid": 0, 00:18:50.007 "state": "enabled", 00:18:50.007 "thread": "nvmf_tgt_poll_group_000", 00:18:50.007 "listen_address": { 00:18:50.007 "trtype": "TCP", 00:18:50.007 "adrfam": "IPv4", 00:18:50.007 "traddr": "10.0.0.2", 00:18:50.007 "trsvcid": "4420" 00:18:50.007 }, 00:18:50.007 "peer_address": { 00:18:50.007 "trtype": "TCP", 00:18:50.007 "adrfam": "IPv4", 00:18:50.007 "traddr": "10.0.0.1", 00:18:50.007 "trsvcid": "35372" 00:18:50.007 }, 00:18:50.007 "auth": { 00:18:50.007 "state": "completed", 00:18:50.007 "digest": "sha384", 00:18:50.007 "dhgroup": "ffdhe6144" 00:18:50.007 } 00:18:50.007 } 00:18:50.007 ]' 00:18:50.007 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.007 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.007 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.007 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:50.007 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.267 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.267 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.267 21:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.267 21:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret 
DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.206 21:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.775 00:18:51.775 21:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.775 21:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.775 21:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.775 21:34:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.775 21:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.775 21:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.775 21:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.775 21:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.775 21:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.775 { 00:18:51.775 "cntlid": 85, 00:18:51.775 "qid": 0, 00:18:51.775 "state": "enabled", 00:18:51.775 "thread": "nvmf_tgt_poll_group_000", 00:18:51.775 "listen_address": { 00:18:51.775 "trtype": "TCP", 00:18:51.775 "adrfam": "IPv4", 00:18:51.775 "traddr": "10.0.0.2", 00:18:51.775 "trsvcid": "4420" 00:18:51.775 }, 00:18:51.775 "peer_address": { 00:18:51.775 "trtype": "TCP", 00:18:51.775 "adrfam": "IPv4", 00:18:51.775 "traddr": "10.0.0.1", 00:18:51.775 "trsvcid": "35402" 00:18:51.775 }, 00:18:51.775 "auth": { 00:18:51.775 "state": "completed", 00:18:51.775 "digest": "sha384", 00:18:51.775 "dhgroup": "ffdhe6144" 00:18:51.775 } 00:18:51.775 } 00:18:51.775 ]' 00:18:51.775 21:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.775 21:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.775 21:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.036 21:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:52.036 21:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.036 21:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.036 21:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.036 21:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.036 21:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
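(For orientation: each connect_authenticate iteration in the trace above reduces to the short host/target RPC sequence sketched below, shown for the sha384 / ffdhe6144 / key2 combination that just completed. The sketch reuses only commands visible in the trace; it assumes the named keys key2/ckey2 were registered earlier in target/auth.sh and that the target-side rpc_cmd calls go to the target's default RPC socket.)

# Minimal sketch of the setup half of one connect_authenticate iteration.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Host side: restrict the initiator to a single digest/dhgroup combination.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
# Target side: allow the host NQN on the subsystem with the key pair under test.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Host side: attach a controller, which triggers the DH-HMAC-CHAP handshake against the target.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2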
00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.975 21:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.546 00:18:53.546 21:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.546 21:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.546 21:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.546 21:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.546 21:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.546 21:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.546 21:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.546 21:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.546 21:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.546 { 00:18:53.546 "cntlid": 87, 00:18:53.546 "qid": 0, 00:18:53.546 "state": "enabled", 00:18:53.546 "thread": "nvmf_tgt_poll_group_000", 00:18:53.546 "listen_address": { 00:18:53.546 "trtype": "TCP", 00:18:53.546 "adrfam": "IPv4", 00:18:53.546 "traddr": "10.0.0.2", 00:18:53.546 "trsvcid": "4420" 00:18:53.546 }, 00:18:53.546 "peer_address": { 00:18:53.546 "trtype": "TCP", 00:18:53.546 "adrfam": "IPv4", 00:18:53.546 "traddr": "10.0.0.1", 00:18:53.546 "trsvcid": "35414" 00:18:53.546 }, 00:18:53.546 "auth": { 00:18:53.546 "state": "completed", 
00:18:53.546 "digest": "sha384", 00:18:53.546 "dhgroup": "ffdhe6144" 00:18:53.546 } 00:18:53.546 } 00:18:53.546 ]' 00:18:53.546 21:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.546 21:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.546 21:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.807 21:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.807 21:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.807 21:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.807 21:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.807 21:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.807 21:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.749 21:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.321 00:18:55.321 21:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.321 21:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.321 21:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.582 21:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.582 21:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.582 21:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.582 21:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.582 21:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.582 21:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.582 { 00:18:55.582 "cntlid": 89, 00:18:55.582 "qid": 0, 00:18:55.582 "state": "enabled", 00:18:55.582 "thread": "nvmf_tgt_poll_group_000", 00:18:55.582 "listen_address": { 00:18:55.582 "trtype": "TCP", 00:18:55.582 "adrfam": "IPv4", 00:18:55.583 "traddr": "10.0.0.2", 00:18:55.583 "trsvcid": "4420" 00:18:55.583 }, 00:18:55.583 "peer_address": { 00:18:55.583 "trtype": "TCP", 00:18:55.583 "adrfam": "IPv4", 00:18:55.583 "traddr": "10.0.0.1", 00:18:55.583 "trsvcid": "35448" 00:18:55.583 }, 00:18:55.583 "auth": { 00:18:55.583 "state": "completed", 00:18:55.583 "digest": "sha384", 00:18:55.583 "dhgroup": "ffdhe8192" 00:18:55.583 } 00:18:55.583 } 00:18:55.583 ]' 00:18:55.583 21:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.583 21:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.583 21:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.583 21:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.583 21:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.583 21:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.583 21:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.583 21:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.843 21:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.784 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
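(The verification half of each iteration, sketched below for the sha384 / ffdhe8192 / key1 combination being set up above: the test confirms the attached controller, checks the qpair's auth block on the target, detaches, then repeats the handshake with the kernel initiator before removing the host. Only commands visible in the trace are used; the DHHC-1 secrets are placeholders here, and the full interchange-format strings are the ones printed in the nvme connect lines of the trace.)

# Minimal sketch of the verification half of one connect_authenticate iteration.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Host side: the attached controller should be listed as nvme0.
"$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
# Target side: the qpair's auth block records the negotiated digest, dhgroup and final state
# (expected here: sha384, ffdhe8192, completed).
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | .digest, .dhgroup, .state'
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Kernel initiator pass: same handshake via nvme-cli, with secrets in DHHC-1 interchange format.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
  --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
  --dhchap-secret 'DHHC-1:01:<key1 host secret>:' --dhchap-ctrl-secret 'DHHC-1:02:<key1 ctrl secret>:'
nvme disconnect -n "$subnqn"
# Target side: drop the host again before the next digest/dhgroup/key iteration.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"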
00:18:57.354 00:18:57.354 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.354 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.354 21:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.354 21:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.354 21:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.354 21:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.354 21:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.354 21:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.354 21:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.354 { 00:18:57.354 "cntlid": 91, 00:18:57.354 "qid": 0, 00:18:57.354 "state": "enabled", 00:18:57.354 "thread": "nvmf_tgt_poll_group_000", 00:18:57.354 "listen_address": { 00:18:57.354 "trtype": "TCP", 00:18:57.354 "adrfam": "IPv4", 00:18:57.354 "traddr": "10.0.0.2", 00:18:57.354 "trsvcid": "4420" 00:18:57.354 }, 00:18:57.354 "peer_address": { 00:18:57.354 "trtype": "TCP", 00:18:57.354 "adrfam": "IPv4", 00:18:57.354 "traddr": "10.0.0.1", 00:18:57.354 "trsvcid": "35474" 00:18:57.354 }, 00:18:57.354 "auth": { 00:18:57.354 "state": "completed", 00:18:57.354 "digest": "sha384", 00:18:57.354 "dhgroup": "ffdhe8192" 00:18:57.354 } 00:18:57.354 } 00:18:57.354 ]' 00:18:57.354 21:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.614 21:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.614 21:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.615 21:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:57.615 21:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.615 21:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.615 21:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.615 21:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.875 21:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:18:58.445 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.445 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.445 21:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:58.445 21:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.445 21:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.445 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.445 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:58.445 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:58.707 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:58.707 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.707 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.707 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:58.707 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:58.707 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.707 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.707 21:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.707 21:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.707 21:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.707 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.707 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.277 00:18:59.277 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.277 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.277 21:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.277 21:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.277 21:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.277 21:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.277 21:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.277 21:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.277 21:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.277 { 
00:18:59.277 "cntlid": 93, 00:18:59.277 "qid": 0, 00:18:59.277 "state": "enabled", 00:18:59.277 "thread": "nvmf_tgt_poll_group_000", 00:18:59.277 "listen_address": { 00:18:59.277 "trtype": "TCP", 00:18:59.277 "adrfam": "IPv4", 00:18:59.277 "traddr": "10.0.0.2", 00:18:59.277 "trsvcid": "4420" 00:18:59.277 }, 00:18:59.277 "peer_address": { 00:18:59.277 "trtype": "TCP", 00:18:59.277 "adrfam": "IPv4", 00:18:59.277 "traddr": "10.0.0.1", 00:18:59.277 "trsvcid": "39784" 00:18:59.277 }, 00:18:59.277 "auth": { 00:18:59.277 "state": "completed", 00:18:59.277 "digest": "sha384", 00:18:59.277 "dhgroup": "ffdhe8192" 00:18:59.277 } 00:18:59.277 } 00:18:59.277 ]' 00:18:59.277 21:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.536 21:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.536 21:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.536 21:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:59.536 21:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.536 21:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.536 21:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.536 21:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.795 21:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:19:00.363 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.363 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.363 21:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.363 21:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.363 21:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.363 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.363 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.363 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.622 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:00.622 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.622 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:00.622 21:34:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:00.622 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:00.622 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.622 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:00.622 21:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.622 21:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.622 21:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.622 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.622 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.190 00:19:01.190 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.190 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.190 21:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.453 21:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.453 21:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.453 21:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.453 21:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.453 21:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.453 21:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.453 { 00:19:01.453 "cntlid": 95, 00:19:01.453 "qid": 0, 00:19:01.453 "state": "enabled", 00:19:01.453 "thread": "nvmf_tgt_poll_group_000", 00:19:01.453 "listen_address": { 00:19:01.453 "trtype": "TCP", 00:19:01.453 "adrfam": "IPv4", 00:19:01.453 "traddr": "10.0.0.2", 00:19:01.453 "trsvcid": "4420" 00:19:01.453 }, 00:19:01.453 "peer_address": { 00:19:01.453 "trtype": "TCP", 00:19:01.453 "adrfam": "IPv4", 00:19:01.453 "traddr": "10.0.0.1", 00:19:01.453 "trsvcid": "39810" 00:19:01.453 }, 00:19:01.453 "auth": { 00:19:01.453 "state": "completed", 00:19:01.453 "digest": "sha384", 00:19:01.453 "dhgroup": "ffdhe8192" 00:19:01.453 } 00:19:01.453 } 00:19:01.453 ]' 00:19:01.453 21:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.453 21:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.453 21:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.453 21:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.453 21:34:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.453 21:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.453 21:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.453 21:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.775 21:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:19:02.345 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.345 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.345 21:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.345 21:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.345 21:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.345 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:02.345 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.345 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.345 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.345 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.606 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:02.606 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.606 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.606 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:02.606 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:02.606 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.606 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.606 21:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.606 21:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.606 21:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.606 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.606 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.867 00:19:02.867 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.867 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.867 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.867 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.867 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.867 21:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.867 21:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.129 21:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.129 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.129 { 00:19:03.129 "cntlid": 97, 00:19:03.129 "qid": 0, 00:19:03.129 "state": "enabled", 00:19:03.129 "thread": "nvmf_tgt_poll_group_000", 00:19:03.129 "listen_address": { 00:19:03.129 "trtype": "TCP", 00:19:03.129 "adrfam": "IPv4", 00:19:03.129 "traddr": "10.0.0.2", 00:19:03.129 "trsvcid": "4420" 00:19:03.129 }, 00:19:03.129 "peer_address": { 00:19:03.129 "trtype": "TCP", 00:19:03.129 "adrfam": "IPv4", 00:19:03.129 "traddr": "10.0.0.1", 00:19:03.129 "trsvcid": "39832" 00:19:03.129 }, 00:19:03.129 "auth": { 00:19:03.129 "state": "completed", 00:19:03.129 "digest": "sha512", 00:19:03.129 "dhgroup": "null" 00:19:03.129 } 00:19:03.129 } 00:19:03.129 ]' 00:19:03.129 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.129 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.129 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.129 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:03.129 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.129 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.129 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.129 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.389 21:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret 
DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:19:03.977 21:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.977 21:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.977 21:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.977 21:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.977 21:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.977 21:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.977 21:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:03.977 21:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:04.237 21:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:04.237 21:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.237 21:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.237 21:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:04.237 21:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:04.237 21:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.237 21:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.237 21:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.237 21:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.237 21:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.237 21:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.237 21:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.498 00:19:04.498 21:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.498 21:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.498 21:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.783 21:34:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.783 21:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.783 21:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.783 21:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.783 21:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.783 21:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.783 { 00:19:04.783 "cntlid": 99, 00:19:04.783 "qid": 0, 00:19:04.783 "state": "enabled", 00:19:04.783 "thread": "nvmf_tgt_poll_group_000", 00:19:04.783 "listen_address": { 00:19:04.783 "trtype": "TCP", 00:19:04.783 "adrfam": "IPv4", 00:19:04.783 "traddr": "10.0.0.2", 00:19:04.783 "trsvcid": "4420" 00:19:04.783 }, 00:19:04.783 "peer_address": { 00:19:04.783 "trtype": "TCP", 00:19:04.783 "adrfam": "IPv4", 00:19:04.783 "traddr": "10.0.0.1", 00:19:04.783 "trsvcid": "39864" 00:19:04.783 }, 00:19:04.783 "auth": { 00:19:04.783 "state": "completed", 00:19:04.783 "digest": "sha512", 00:19:04.783 "dhgroup": "null" 00:19:04.783 } 00:19:04.783 } 00:19:04.783 ]' 00:19:04.783 21:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.783 21:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.783 21:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.783 21:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:04.783 21:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.783 21:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.783 21:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.783 21:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.044 21:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:19:05.616 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:05.877 21:34:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.877 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.138 00:19:06.138 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.138 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.138 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.398 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.398 21:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.398 21:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.398 21:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.398 21:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.398 21:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.398 { 00:19:06.398 "cntlid": 101, 00:19:06.398 "qid": 0, 00:19:06.398 "state": "enabled", 00:19:06.398 "thread": "nvmf_tgt_poll_group_000", 00:19:06.398 "listen_address": { 00:19:06.398 "trtype": "TCP", 00:19:06.398 "adrfam": "IPv4", 00:19:06.398 "traddr": "10.0.0.2", 00:19:06.398 "trsvcid": "4420" 00:19:06.398 }, 00:19:06.398 "peer_address": { 00:19:06.398 "trtype": "TCP", 00:19:06.398 "adrfam": "IPv4", 00:19:06.398 "traddr": "10.0.0.1", 00:19:06.398 "trsvcid": "39888" 00:19:06.398 }, 00:19:06.398 "auth": 
{ 00:19:06.398 "state": "completed", 00:19:06.398 "digest": "sha512", 00:19:06.398 "dhgroup": "null" 00:19:06.398 } 00:19:06.398 } 00:19:06.398 ]' 00:19:06.398 21:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.398 21:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.398 21:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.398 21:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:06.398 21:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.398 21:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.398 21:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.398 21:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.659 21:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.601 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.862 00:19:07.862 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.862 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.862 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.862 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.862 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.862 21:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.862 21:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.862 21:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.862 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.862 { 00:19:07.862 "cntlid": 103, 00:19:07.862 "qid": 0, 00:19:07.862 "state": "enabled", 00:19:07.862 "thread": "nvmf_tgt_poll_group_000", 00:19:07.862 "listen_address": { 00:19:07.862 "trtype": "TCP", 00:19:07.862 "adrfam": "IPv4", 00:19:07.862 "traddr": "10.0.0.2", 00:19:07.862 "trsvcid": "4420" 00:19:07.862 }, 00:19:07.862 "peer_address": { 00:19:07.862 "trtype": "TCP", 00:19:07.862 "adrfam": "IPv4", 00:19:07.862 "traddr": "10.0.0.1", 00:19:07.862 "trsvcid": "39928" 00:19:07.862 }, 00:19:07.862 "auth": { 00:19:07.862 "state": "completed", 00:19:07.862 "digest": "sha512", 00:19:07.862 "dhgroup": "null" 00:19:07.862 } 00:19:07.862 } 00:19:07.862 ]' 00:19:07.862 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.122 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.122 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.122 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:08.122 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.122 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.122 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.122 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.123 21:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:19:09.064 21:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.064 21:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.064 21:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.064 21:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.064 21:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.064 21:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.064 21:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.064 21:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.064 21:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.326 21:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:09.326 21:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.326 21:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.326 21:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:09.326 21:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:09.326 21:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.326 21:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.326 21:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.326 21:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.326 21:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.326 21:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.326 21:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.326 00:19:09.326 21:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.326 21:34:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.326 21:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.586 21:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.586 21:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.586 21:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.586 21:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.586 21:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.586 21:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.586 { 00:19:09.586 "cntlid": 105, 00:19:09.586 "qid": 0, 00:19:09.586 "state": "enabled", 00:19:09.586 "thread": "nvmf_tgt_poll_group_000", 00:19:09.586 "listen_address": { 00:19:09.586 "trtype": "TCP", 00:19:09.586 "adrfam": "IPv4", 00:19:09.586 "traddr": "10.0.0.2", 00:19:09.586 "trsvcid": "4420" 00:19:09.586 }, 00:19:09.586 "peer_address": { 00:19:09.586 "trtype": "TCP", 00:19:09.586 "adrfam": "IPv4", 00:19:09.586 "traddr": "10.0.0.1", 00:19:09.586 "trsvcid": "34202" 00:19:09.586 }, 00:19:09.586 "auth": { 00:19:09.586 "state": "completed", 00:19:09.586 "digest": "sha512", 00:19:09.586 "dhgroup": "ffdhe2048" 00:19:09.586 } 00:19:09.586 } 00:19:09.586 ]' 00:19:09.586 21:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.586 21:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.586 21:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.586 21:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:09.586 21:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.846 21:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.846 21:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.846 21:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.846 21:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
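Annotation: for orientation, one sha512/ffdhe2048 round like the one that just completed boils down to the RPC sequence below. This is a sketch reconstructed from the trace, not the literal body of target/auth.sh: key0/ckey0 are key names registered earlier in the run (their creation is not shown in this excerpt), rpc.py without -s is assumed to reach the target's default socket, and /var/tmp/host.sock is the second SPDK instance acting as the host.

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
SUBNQN=nqn.2024-03.io.spdk:cnode0
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock

# Host side: restrict the initiator to the digest/DH group under test.
"$RPC" -s "$HOSTSOCK" bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Target side: allow the host NQN and bind its DH-HMAC-CHAP keys.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach the controller; this is where authentication runs.
"$RPC" -s "$HOSTSOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# ...qpair checks as in the earlier sketch, then tear down for the next key.
"$RPC" -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
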
00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.789 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.049 00:19:11.049 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.049 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.049 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.310 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.310 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.310 21:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.310 21:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.310 21:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.310 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.310 { 00:19:11.310 "cntlid": 107, 00:19:11.310 "qid": 0, 00:19:11.310 "state": "enabled", 00:19:11.310 "thread": 
"nvmf_tgt_poll_group_000", 00:19:11.310 "listen_address": { 00:19:11.310 "trtype": "TCP", 00:19:11.310 "adrfam": "IPv4", 00:19:11.310 "traddr": "10.0.0.2", 00:19:11.310 "trsvcid": "4420" 00:19:11.310 }, 00:19:11.310 "peer_address": { 00:19:11.310 "trtype": "TCP", 00:19:11.310 "adrfam": "IPv4", 00:19:11.310 "traddr": "10.0.0.1", 00:19:11.310 "trsvcid": "34246" 00:19:11.310 }, 00:19:11.310 "auth": { 00:19:11.310 "state": "completed", 00:19:11.310 "digest": "sha512", 00:19:11.310 "dhgroup": "ffdhe2048" 00:19:11.310 } 00:19:11.310 } 00:19:11.310 ]' 00:19:11.310 21:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.310 21:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.310 21:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.311 21:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.311 21:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.311 21:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.311 21:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.311 21:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.571 21:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:19:12.511 21:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:12.511 21:35:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.511 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.773 00:19:12.773 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.773 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.773 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.034 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.034 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.034 21:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.034 21:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.034 21:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.034 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.034 { 00:19:13.034 "cntlid": 109, 00:19:13.034 "qid": 0, 00:19:13.034 "state": "enabled", 00:19:13.034 "thread": "nvmf_tgt_poll_group_000", 00:19:13.034 "listen_address": { 00:19:13.034 "trtype": "TCP", 00:19:13.034 "adrfam": "IPv4", 00:19:13.034 "traddr": "10.0.0.2", 00:19:13.034 "trsvcid": "4420" 00:19:13.034 }, 00:19:13.034 "peer_address": { 00:19:13.034 "trtype": "TCP", 00:19:13.034 "adrfam": "IPv4", 00:19:13.034 "traddr": "10.0.0.1", 00:19:13.034 "trsvcid": "34282" 00:19:13.034 }, 00:19:13.034 "auth": { 00:19:13.034 "state": "completed", 00:19:13.034 "digest": "sha512", 00:19:13.034 "dhgroup": "ffdhe2048" 00:19:13.034 } 00:19:13.034 } 00:19:13.034 ]' 00:19:13.034 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.034 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.034 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.034 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.034 21:35:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.034 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.034 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.034 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.295 21:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:19:13.866 21:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.127 21:35:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.388 00:19:14.388 21:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.388 21:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.388 21:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.649 21:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.649 21:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.649 21:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.649 21:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.649 21:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.649 21:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.649 { 00:19:14.649 "cntlid": 111, 00:19:14.649 "qid": 0, 00:19:14.649 "state": "enabled", 00:19:14.649 "thread": "nvmf_tgt_poll_group_000", 00:19:14.649 "listen_address": { 00:19:14.649 "trtype": "TCP", 00:19:14.649 "adrfam": "IPv4", 00:19:14.649 "traddr": "10.0.0.2", 00:19:14.649 "trsvcid": "4420" 00:19:14.649 }, 00:19:14.649 "peer_address": { 00:19:14.649 "trtype": "TCP", 00:19:14.649 "adrfam": "IPv4", 00:19:14.649 "traddr": "10.0.0.1", 00:19:14.649 "trsvcid": "34296" 00:19:14.649 }, 00:19:14.649 "auth": { 00:19:14.649 "state": "completed", 00:19:14.649 "digest": "sha512", 00:19:14.649 "dhgroup": "ffdhe2048" 00:19:14.649 } 00:19:14.649 } 00:19:14.649 ]' 00:19:14.649 21:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.649 21:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.649 21:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.649 21:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:14.649 21:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.649 21:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.649 21:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.649 21:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.909 21:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:19:15.849 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.849 21:35:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:15.849 21:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.849 21:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.849 21:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.850 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.112 00:19:16.112 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.112 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.112 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.112 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.112 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.112 21:35:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.112 21:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.112 21:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.112 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.112 { 00:19:16.112 "cntlid": 113, 00:19:16.112 "qid": 0, 00:19:16.112 "state": "enabled", 00:19:16.112 "thread": "nvmf_tgt_poll_group_000", 00:19:16.112 "listen_address": { 00:19:16.112 "trtype": "TCP", 00:19:16.112 "adrfam": "IPv4", 00:19:16.112 "traddr": "10.0.0.2", 00:19:16.112 "trsvcid": "4420" 00:19:16.112 }, 00:19:16.112 "peer_address": { 00:19:16.112 "trtype": "TCP", 00:19:16.112 "adrfam": "IPv4", 00:19:16.112 "traddr": "10.0.0.1", 00:19:16.112 "trsvcid": "34314" 00:19:16.112 }, 00:19:16.112 "auth": { 00:19:16.112 "state": "completed", 00:19:16.112 "digest": "sha512", 00:19:16.112 "dhgroup": "ffdhe3072" 00:19:16.112 } 00:19:16.112 } 00:19:16.112 ]' 00:19:16.112 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.381 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.381 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.381 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.381 21:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.381 21:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.381 21:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.381 21:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.675 21:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:19:17.247 21:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.247 21:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.247 21:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.247 21:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.247 21:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.247 21:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.247 21:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.247 21:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.508 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:17.508 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.508 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:17.508 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:17.508 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:17.508 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.508 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.508 21:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.508 21:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.508 21:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.508 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.508 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.769 00:19:17.769 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.769 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.769 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.769 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.769 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.769 21:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.769 21:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.769 21:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.769 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.769 { 00:19:17.769 "cntlid": 115, 00:19:17.769 "qid": 0, 00:19:17.769 "state": "enabled", 00:19:17.769 "thread": "nvmf_tgt_poll_group_000", 00:19:17.769 "listen_address": { 00:19:17.769 "trtype": "TCP", 00:19:17.769 "adrfam": "IPv4", 00:19:17.769 "traddr": "10.0.0.2", 00:19:17.769 "trsvcid": "4420" 00:19:17.769 }, 00:19:17.769 "peer_address": { 00:19:17.769 "trtype": "TCP", 00:19:17.769 "adrfam": "IPv4", 00:19:17.769 "traddr": "10.0.0.1", 00:19:17.769 "trsvcid": "34332" 00:19:17.769 }, 00:19:17.769 "auth": { 00:19:17.769 "state": "completed", 00:19:17.769 "digest": "sha512", 00:19:17.769 "dhgroup": "ffdhe3072" 00:19:17.769 } 00:19:17.769 } 
00:19:17.769 ]' 00:19:17.769 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.031 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.031 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.031 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.031 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.031 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.031 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.031 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.291 21:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:19:18.861 21:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.861 21:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.861 21:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.861 21:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.861 21:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.861 21:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.861 21:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.861 21:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.121 21:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:19.121 21:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.121 21:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.121 21:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:19.121 21:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:19.121 21:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.121 21:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.121 21:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.121 21:35:08 
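Annotation: the recurring target/auth.sh@92 and @93 markers (for dhgroup in "${dhgroups[@]}" / for keyid in "${!keys[@]}") show the shape of the sweep driving this stretch of the log: for the sha512 digest, every DH group is tried against every key index, with hostrpc and connect_authenticate being the script's own helpers as seen in the trace. Paraphrased below; the group list is an assumption in that only null through ffdhe4096 are visible in this excerpt.

# Schematic of the sweep, not the literal loop from target/auth.sh.
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096)   # assumed; larger groups may follow
for dhgroup in "${dhgroups[@]}"; do
    for keyid in 0 1 2 3; do
        # hostrpc wraps rpc.py -s /var/tmp/host.sock, per the trace.
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done
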
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.121 21:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.122 21:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.122 21:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.382 00:19:19.382 21:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.382 21:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.382 21:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.641 21:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.641 21:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.641 21:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.641 21:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.641 21:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.641 21:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.641 { 00:19:19.641 "cntlid": 117, 00:19:19.641 "qid": 0, 00:19:19.641 "state": "enabled", 00:19:19.641 "thread": "nvmf_tgt_poll_group_000", 00:19:19.642 "listen_address": { 00:19:19.642 "trtype": "TCP", 00:19:19.642 "adrfam": "IPv4", 00:19:19.642 "traddr": "10.0.0.2", 00:19:19.642 "trsvcid": "4420" 00:19:19.642 }, 00:19:19.642 "peer_address": { 00:19:19.642 "trtype": "TCP", 00:19:19.642 "adrfam": "IPv4", 00:19:19.642 "traddr": "10.0.0.1", 00:19:19.642 "trsvcid": "41742" 00:19:19.642 }, 00:19:19.642 "auth": { 00:19:19.642 "state": "completed", 00:19:19.642 "digest": "sha512", 00:19:19.642 "dhgroup": "ffdhe3072" 00:19:19.642 } 00:19:19.642 } 00:19:19.642 ]' 00:19:19.642 21:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.642 21:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.642 21:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.642 21:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.642 21:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.642 21:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.642 21:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.642 21:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.901 21:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:19:20.472 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.732 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.992 00:19:20.992 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.992 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.992 21:35:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.252 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.252 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.252 21:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.252 21:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.252 21:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.252 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.252 { 00:19:21.252 "cntlid": 119, 00:19:21.252 "qid": 0, 00:19:21.252 "state": "enabled", 00:19:21.252 "thread": "nvmf_tgt_poll_group_000", 00:19:21.252 "listen_address": { 00:19:21.252 "trtype": "TCP", 00:19:21.252 "adrfam": "IPv4", 00:19:21.252 "traddr": "10.0.0.2", 00:19:21.252 "trsvcid": "4420" 00:19:21.252 }, 00:19:21.252 "peer_address": { 00:19:21.252 "trtype": "TCP", 00:19:21.252 "adrfam": "IPv4", 00:19:21.252 "traddr": "10.0.0.1", 00:19:21.252 "trsvcid": "41776" 00:19:21.252 }, 00:19:21.252 "auth": { 00:19:21.252 "state": "completed", 00:19:21.252 "digest": "sha512", 00:19:21.252 "dhgroup": "ffdhe3072" 00:19:21.252 } 00:19:21.252 } 00:19:21.252 ]' 00:19:21.252 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.252 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.252 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.252 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.252 21:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.252 21:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.252 21:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.252 21:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.511 21:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:19:22.451 21:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.451 21:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.451 21:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.451 21:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.451 21:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.451 21:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.451 21:35:11 
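Annotation: one detail worth noting in these key3 rounds is that nvmf_subsystem_add_host, bdev_nvme_attach_controller and the nvme-cli connect all carry only the host key, never a controller key. That is the effect of the expansion logged at target/auth.sh@37, which appends --dhchap-ctrlr-key only when a ckey exists for the index. A minimal illustration of that bash idiom (array contents are made up; in the real run only index 3 appears to lack a ckey):

# ${arr[i]:+...} expands to the alternative text only when arr[i] is set
# and non-empty, so an empty ckey slot silently drops the option.
ckeys=(ckey0 ckey1 ckey2 "")          # hypothetical; index 3 intentionally empty
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "extra args for key$keyid: ${ckey[*]:-<none>}"   # prints <none> for key3
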
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.451 21:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:22.451 21:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:22.451 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:22.451 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.451 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.451 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:22.451 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:22.451 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.451 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.451 21:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.451 21:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.451 21:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.451 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.451 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.711 00:19:22.711 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.711 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.711 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.972 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.972 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.972 21:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.972 21:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.972 21:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.972 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.972 { 00:19:22.972 "cntlid": 121, 00:19:22.972 "qid": 0, 00:19:22.972 "state": "enabled", 00:19:22.972 "thread": "nvmf_tgt_poll_group_000", 00:19:22.972 "listen_address": { 00:19:22.972 "trtype": "TCP", 00:19:22.972 "adrfam": "IPv4", 
00:19:22.972 "traddr": "10.0.0.2", 00:19:22.972 "trsvcid": "4420" 00:19:22.972 }, 00:19:22.972 "peer_address": { 00:19:22.972 "trtype": "TCP", 00:19:22.972 "adrfam": "IPv4", 00:19:22.972 "traddr": "10.0.0.1", 00:19:22.972 "trsvcid": "41800" 00:19:22.972 }, 00:19:22.972 "auth": { 00:19:22.972 "state": "completed", 00:19:22.972 "digest": "sha512", 00:19:22.972 "dhgroup": "ffdhe4096" 00:19:22.972 } 00:19:22.972 } 00:19:22.972 ]' 00:19:22.972 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.972 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.972 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.972 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.972 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.972 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.972 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.972 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.232 21:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:19:23.803 21:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:24.063 21:35:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.063 21:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.322 00:19:24.322 21:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.322 21:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.322 21:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.583 21:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.583 21:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.583 21:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.583 21:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.583 21:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.583 21:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.583 { 00:19:24.583 "cntlid": 123, 00:19:24.583 "qid": 0, 00:19:24.583 "state": "enabled", 00:19:24.583 "thread": "nvmf_tgt_poll_group_000", 00:19:24.583 "listen_address": { 00:19:24.583 "trtype": "TCP", 00:19:24.583 "adrfam": "IPv4", 00:19:24.583 "traddr": "10.0.0.2", 00:19:24.583 "trsvcid": "4420" 00:19:24.583 }, 00:19:24.583 "peer_address": { 00:19:24.583 "trtype": "TCP", 00:19:24.583 "adrfam": "IPv4", 00:19:24.583 "traddr": "10.0.0.1", 00:19:24.583 "trsvcid": "41820" 00:19:24.583 }, 00:19:24.583 "auth": { 00:19:24.583 "state": "completed", 00:19:24.583 "digest": "sha512", 00:19:24.583 "dhgroup": "ffdhe4096" 00:19:24.583 } 00:19:24.583 } 00:19:24.583 ]' 00:19:24.583 21:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.583 21:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.583 21:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.583 21:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:24.583 21:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.583 21:35:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.583 21:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.583 21:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.844 21:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.787 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.049 00:19:26.049 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.049 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.049 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.310 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.310 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.310 21:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.310 21:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.310 21:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.310 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.310 { 00:19:26.310 "cntlid": 125, 00:19:26.310 "qid": 0, 00:19:26.310 "state": "enabled", 00:19:26.310 "thread": "nvmf_tgt_poll_group_000", 00:19:26.310 "listen_address": { 00:19:26.310 "trtype": "TCP", 00:19:26.310 "adrfam": "IPv4", 00:19:26.310 "traddr": "10.0.0.2", 00:19:26.310 "trsvcid": "4420" 00:19:26.310 }, 00:19:26.310 "peer_address": { 00:19:26.310 "trtype": "TCP", 00:19:26.310 "adrfam": "IPv4", 00:19:26.310 "traddr": "10.0.0.1", 00:19:26.310 "trsvcid": "41844" 00:19:26.310 }, 00:19:26.310 "auth": { 00:19:26.310 "state": "completed", 00:19:26.310 "digest": "sha512", 00:19:26.310 "dhgroup": "ffdhe4096" 00:19:26.310 } 00:19:26.310 } 00:19:26.310 ]' 00:19:26.310 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.310 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.310 21:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.310 21:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.310 21:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.310 21:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.310 21:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.310 21:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.569 21:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:19:27.508 21:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
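The records above and below repeat one connect_authenticate pass of target/auth.sh per digest / DH-group / key-index combination. Condensed into plain commands, a single pass looks roughly like the sketch below. This is a non-authoritative summary assuming the subsystem NQN, host NQN/UUID, addresses, host RPC socket, and key names (key0/ckey0) shown in this log; the DHHC-1 secret strings are abbreviated placeholders here, not usable keys.
  # host side: restrict the initiator to one digest/DH group for this pass
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # target side (rpc_cmd in the log): allow the host with the key pair under test
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach with the matching keys, then verify the qpair negotiated auth
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth'      # expect the configured digest/dhgroup and state "completed"
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # kernel initiator path: the same keys passed as DHHC-1 secrets on the command line
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-secret 'DHHC-1:00:<abbreviated>:' --dhchap-ctrl-secret 'DHHC-1:03:<abbreviated>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be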
00:19:27.508 21:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.509 21:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.509 21:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.509 21:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.509 21:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.509 21:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.509 21:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.509 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:27.509 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.509 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.509 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:27.509 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:27.509 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.509 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:27.509 21:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.509 21:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.509 21:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.509 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.509 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.769 00:19:27.769 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.769 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.769 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.769 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.769 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.769 21:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.769 21:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:27.769 21:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.769 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.769 { 00:19:27.769 "cntlid": 127, 00:19:27.769 "qid": 0, 00:19:27.769 "state": "enabled", 00:19:27.769 "thread": "nvmf_tgt_poll_group_000", 00:19:27.769 "listen_address": { 00:19:27.769 "trtype": "TCP", 00:19:27.769 "adrfam": "IPv4", 00:19:27.769 "traddr": "10.0.0.2", 00:19:27.769 "trsvcid": "4420" 00:19:27.769 }, 00:19:27.769 "peer_address": { 00:19:27.769 "trtype": "TCP", 00:19:27.769 "adrfam": "IPv4", 00:19:27.769 "traddr": "10.0.0.1", 00:19:27.769 "trsvcid": "41884" 00:19:27.769 }, 00:19:27.769 "auth": { 00:19:27.769 "state": "completed", 00:19:27.769 "digest": "sha512", 00:19:27.769 "dhgroup": "ffdhe4096" 00:19:27.769 } 00:19:27.769 } 00:19:27.769 ]' 00:19:27.769 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.029 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.029 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.029 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.029 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.029 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.029 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.029 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.289 21:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:19:28.859 21:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.859 21:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:28.859 21:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.859 21:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.859 21:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.859 21:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.859 21:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.859 21:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:28.859 21:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.120 21:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:29.120 21:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.120 21:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.120 21:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:29.120 21:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:29.120 21:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.120 21:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.120 21:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.120 21:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.120 21:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.120 21:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.120 21:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.380 00:19:29.380 21:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.380 21:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.380 21:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.640 21:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.640 21:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.640 21:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.640 21:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.640 21:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.640 21:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.640 { 00:19:29.640 "cntlid": 129, 00:19:29.640 "qid": 0, 00:19:29.640 "state": "enabled", 00:19:29.640 "thread": "nvmf_tgt_poll_group_000", 00:19:29.640 "listen_address": { 00:19:29.640 "trtype": "TCP", 00:19:29.640 "adrfam": "IPv4", 00:19:29.640 "traddr": "10.0.0.2", 00:19:29.640 "trsvcid": "4420" 00:19:29.640 }, 00:19:29.640 "peer_address": { 00:19:29.640 "trtype": "TCP", 00:19:29.640 "adrfam": "IPv4", 00:19:29.640 "traddr": "10.0.0.1", 00:19:29.640 "trsvcid": "41920" 00:19:29.640 }, 00:19:29.640 "auth": { 00:19:29.640 "state": "completed", 00:19:29.640 "digest": "sha512", 00:19:29.640 "dhgroup": "ffdhe6144" 00:19:29.640 } 00:19:29.640 } 00:19:29.640 ]' 00:19:29.640 21:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.640 21:35:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.640 21:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.640 21:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:29.640 21:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.900 21:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.900 21:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.900 21:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.900 21:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:19:30.849 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.849 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.849 21:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.849 21:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.849 21:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.850 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.850 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:30.850 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:30.850 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:30.850 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.850 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.850 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:30.850 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:30.850 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.850 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.850 21:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.850 21:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.850 21:35:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.850 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.850 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.150 00:19:31.410 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.410 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.410 21:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.410 21:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.410 21:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.410 21:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.410 21:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.410 21:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.410 21:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.410 { 00:19:31.410 "cntlid": 131, 00:19:31.410 "qid": 0, 00:19:31.410 "state": "enabled", 00:19:31.410 "thread": "nvmf_tgt_poll_group_000", 00:19:31.410 "listen_address": { 00:19:31.410 "trtype": "TCP", 00:19:31.410 "adrfam": "IPv4", 00:19:31.410 "traddr": "10.0.0.2", 00:19:31.410 "trsvcid": "4420" 00:19:31.410 }, 00:19:31.410 "peer_address": { 00:19:31.410 "trtype": "TCP", 00:19:31.410 "adrfam": "IPv4", 00:19:31.410 "traddr": "10.0.0.1", 00:19:31.410 "trsvcid": "41952" 00:19:31.410 }, 00:19:31.410 "auth": { 00:19:31.410 "state": "completed", 00:19:31.410 "digest": "sha512", 00:19:31.410 "dhgroup": "ffdhe6144" 00:19:31.410 } 00:19:31.410 } 00:19:31.410 ]' 00:19:31.410 21:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.410 21:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.410 21:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.670 21:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:31.670 21:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.670 21:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.670 21:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.670 21:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.670 21:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.611 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.182 00:19:33.182 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.182 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.182 21:35:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.182 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.182 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.182 21:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.182 21:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.182 21:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.182 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.182 { 00:19:33.182 "cntlid": 133, 00:19:33.182 "qid": 0, 00:19:33.182 "state": "enabled", 00:19:33.182 "thread": "nvmf_tgt_poll_group_000", 00:19:33.182 "listen_address": { 00:19:33.182 "trtype": "TCP", 00:19:33.182 "adrfam": "IPv4", 00:19:33.182 "traddr": "10.0.0.2", 00:19:33.182 "trsvcid": "4420" 00:19:33.182 }, 00:19:33.182 "peer_address": { 00:19:33.182 "trtype": "TCP", 00:19:33.182 "adrfam": "IPv4", 00:19:33.182 "traddr": "10.0.0.1", 00:19:33.182 "trsvcid": "41982" 00:19:33.182 }, 00:19:33.182 "auth": { 00:19:33.182 "state": "completed", 00:19:33.182 "digest": "sha512", 00:19:33.182 "dhgroup": "ffdhe6144" 00:19:33.182 } 00:19:33.182 } 00:19:33.182 ]' 00:19:33.182 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.182 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.182 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.443 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.443 21:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.443 21:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.443 21:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.443 21:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.443 21:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:19:34.383 21:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.383 21:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.383 21:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.383 21:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.383 21:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.383 21:35:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.383 21:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:34.383 21:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:34.383 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:34.383 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.383 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.383 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:34.383 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:34.383 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.383 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:34.383 21:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.383 21:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.383 21:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.383 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.383 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.644 00:19:34.904 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.904 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.904 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.904 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.904 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.904 21:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.904 21:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.904 21:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.904 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.904 { 00:19:34.904 "cntlid": 135, 00:19:34.904 "qid": 0, 00:19:34.904 "state": "enabled", 00:19:34.904 "thread": "nvmf_tgt_poll_group_000", 00:19:34.904 "listen_address": { 00:19:34.904 "trtype": "TCP", 00:19:34.904 "adrfam": "IPv4", 00:19:34.904 "traddr": "10.0.0.2", 00:19:34.904 "trsvcid": "4420" 00:19:34.904 }, 
00:19:34.904 "peer_address": { 00:19:34.904 "trtype": "TCP", 00:19:34.904 "adrfam": "IPv4", 00:19:34.904 "traddr": "10.0.0.1", 00:19:34.904 "trsvcid": "42018" 00:19:34.904 }, 00:19:34.904 "auth": { 00:19:34.904 "state": "completed", 00:19:34.904 "digest": "sha512", 00:19:34.904 "dhgroup": "ffdhe6144" 00:19:34.904 } 00:19:34.904 } 00:19:34.904 ]' 00:19:34.904 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.904 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.904 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.165 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.165 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.165 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.165 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.165 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.165 21:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.109 21:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.679 00:19:36.679 21:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.679 21:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.679 21:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.939 21:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.939 21:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.939 21:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.939 21:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.939 21:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.939 21:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.939 { 00:19:36.939 "cntlid": 137, 00:19:36.939 "qid": 0, 00:19:36.940 "state": "enabled", 00:19:36.940 "thread": "nvmf_tgt_poll_group_000", 00:19:36.940 "listen_address": { 00:19:36.940 "trtype": "TCP", 00:19:36.940 "adrfam": "IPv4", 00:19:36.940 "traddr": "10.0.0.2", 00:19:36.940 "trsvcid": "4420" 00:19:36.940 }, 00:19:36.940 "peer_address": { 00:19:36.940 "trtype": "TCP", 00:19:36.940 "adrfam": "IPv4", 00:19:36.940 "traddr": "10.0.0.1", 00:19:36.940 "trsvcid": "42048" 00:19:36.940 }, 00:19:36.940 "auth": { 00:19:36.940 "state": "completed", 00:19:36.940 "digest": "sha512", 00:19:36.940 "dhgroup": "ffdhe8192" 00:19:36.940 } 00:19:36.940 } 00:19:36.940 ]' 00:19:36.940 21:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.940 21:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.940 21:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.940 21:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:36.940 21:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.940 21:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.940 21:35:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.940 21:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.200 21:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.146 21:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.716 00:19:38.716 21:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.716 21:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.716 21:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.716 21:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.716 21:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.716 21:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.716 21:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.716 21:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.716 21:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.716 { 00:19:38.716 "cntlid": 139, 00:19:38.716 "qid": 0, 00:19:38.716 "state": "enabled", 00:19:38.716 "thread": "nvmf_tgt_poll_group_000", 00:19:38.716 "listen_address": { 00:19:38.716 "trtype": "TCP", 00:19:38.716 "adrfam": "IPv4", 00:19:38.716 "traddr": "10.0.0.2", 00:19:38.716 "trsvcid": "4420" 00:19:38.716 }, 00:19:38.716 "peer_address": { 00:19:38.716 "trtype": "TCP", 00:19:38.716 "adrfam": "IPv4", 00:19:38.716 "traddr": "10.0.0.1", 00:19:38.716 "trsvcid": "54752" 00:19:38.716 }, 00:19:38.716 "auth": { 00:19:38.716 "state": "completed", 00:19:38.716 "digest": "sha512", 00:19:38.716 "dhgroup": "ffdhe8192" 00:19:38.716 } 00:19:38.716 } 00:19:38.716 ]' 00:19:38.716 21:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.975 21:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.976 21:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.976 21:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.976 21:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.976 21:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.976 21:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.976 21:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.976 21:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:NzE5OTUzOWZhZGU4MjJjY2Y1MmI4NGUwZjQ5NmIyNmRL0w8H: --dhchap-ctrl-secret DHHC-1:02:MTk1ZTYwODI2NjhlYzA1Y2ZlODRkNzZhNmZlNWFhZmQ0MjIwMTE3YTM4YzgwM2ZkA6ygbQ==: 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.916 21:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.486 00:19:40.486 21:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.486 21:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.486 21:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.746 21:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.746 21:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.746 21:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.746 21:35:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:40.746 21:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.746 21:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.746 { 00:19:40.746 "cntlid": 141, 00:19:40.746 "qid": 0, 00:19:40.746 "state": "enabled", 00:19:40.746 "thread": "nvmf_tgt_poll_group_000", 00:19:40.746 "listen_address": { 00:19:40.746 "trtype": "TCP", 00:19:40.746 "adrfam": "IPv4", 00:19:40.746 "traddr": "10.0.0.2", 00:19:40.746 "trsvcid": "4420" 00:19:40.746 }, 00:19:40.746 "peer_address": { 00:19:40.746 "trtype": "TCP", 00:19:40.746 "adrfam": "IPv4", 00:19:40.746 "traddr": "10.0.0.1", 00:19:40.746 "trsvcid": "54776" 00:19:40.746 }, 00:19:40.746 "auth": { 00:19:40.746 "state": "completed", 00:19:40.746 "digest": "sha512", 00:19:40.746 "dhgroup": "ffdhe8192" 00:19:40.746 } 00:19:40.746 } 00:19:40.746 ]' 00:19:40.746 21:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.746 21:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.746 21:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.746 21:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.746 21:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.746 21:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.746 21:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.746 21:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.007 21:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZmVjOTE0NDk4MjM3ZGZkYzJmMDU1YTM4YmZjZmRlZWVmNWQ3YjE4ODBmZmMzNmM0i/FJ7w==: --dhchap-ctrl-secret DHHC-1:01:ZmQ1YjZmMTY4MWQxMTBiZDVhYTFkOGM0YTMxYmQ3ODc0c4nb: 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.948 21:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.518 00:19:42.519 21:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.519 21:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.519 21:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.779 21:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.779 21:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.779 21:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.779 21:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.779 21:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.779 21:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.779 { 00:19:42.779 "cntlid": 143, 00:19:42.779 "qid": 0, 00:19:42.780 "state": "enabled", 00:19:42.780 "thread": "nvmf_tgt_poll_group_000", 00:19:42.780 "listen_address": { 00:19:42.780 "trtype": "TCP", 00:19:42.780 "adrfam": "IPv4", 00:19:42.780 "traddr": "10.0.0.2", 00:19:42.780 "trsvcid": "4420" 00:19:42.780 }, 00:19:42.780 "peer_address": { 00:19:42.780 "trtype": "TCP", 00:19:42.780 "adrfam": "IPv4", 00:19:42.780 "traddr": "10.0.0.1", 00:19:42.780 "trsvcid": "54798" 00:19:42.780 }, 00:19:42.780 "auth": { 00:19:42.780 "state": "completed", 00:19:42.780 "digest": "sha512", 00:19:42.780 "dhgroup": "ffdhe8192" 00:19:42.780 } 00:19:42.780 } 00:19:42.780 ]' 00:19:42.780 21:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.780 21:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.780 
21:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.780 21:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:42.780 21:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.780 21:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.780 21:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.780 21:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.040 21:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:19:43.626 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.626 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:43.626 21:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.626 21:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.626 21:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.626 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:43.627 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:43.627 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:43.627 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:43.627 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:43.627 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:43.895 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:43.895 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.895 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.895 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:43.895 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:43.895 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.895 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:43.895 21:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.895 21:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.895 21:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.895 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.895 21:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.465 00:19:44.465 21:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.465 21:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.465 21:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.465 21:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.725 21:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.725 21:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.725 21:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.725 21:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.725 21:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.725 { 00:19:44.725 "cntlid": 145, 00:19:44.725 "qid": 0, 00:19:44.725 "state": "enabled", 00:19:44.725 "thread": "nvmf_tgt_poll_group_000", 00:19:44.725 "listen_address": { 00:19:44.725 "trtype": "TCP", 00:19:44.725 "adrfam": "IPv4", 00:19:44.725 "traddr": "10.0.0.2", 00:19:44.725 "trsvcid": "4420" 00:19:44.725 }, 00:19:44.725 "peer_address": { 00:19:44.725 "trtype": "TCP", 00:19:44.725 "adrfam": "IPv4", 00:19:44.725 "traddr": "10.0.0.1", 00:19:44.725 "trsvcid": "54830" 00:19:44.725 }, 00:19:44.725 "auth": { 00:19:44.725 "state": "completed", 00:19:44.725 "digest": "sha512", 00:19:44.725 "dhgroup": "ffdhe8192" 00:19:44.725 } 00:19:44.725 } 00:19:44.725 ]' 00:19:44.725 21:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.725 21:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.725 21:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.725 21:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.725 21:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.725 21:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.725 21:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.725 21:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.986 21:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZWQwNDgwNTI3MzgyYjU4ZWZlOTFiNDVhNjE2NmU5Zjg4NDU2NWQyYWE2NjExMzVhI0+Q1g==: --dhchap-ctrl-secret DHHC-1:03:ZGZhZmFiZGEwNTJmNTVlNzU0NGU5OTRmNDE5NDhiZDAxOTk3M2M1ZDdlYzViMjU5NDM3N2RhZDBjMGRmYmU2N1ieEP8=: 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:45.557 21:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:19:46.174 request: 00:19:46.174 { 00:19:46.174 "name": "nvme0", 00:19:46.174 "trtype": "tcp", 00:19:46.174 "traddr": "10.0.0.2", 00:19:46.174 "adrfam": "ipv4", 00:19:46.174 "trsvcid": "4420", 00:19:46.174 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:46.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:46.174 "prchk_reftag": false, 00:19:46.174 "prchk_guard": false, 00:19:46.174 "hdgst": false, 00:19:46.174 "ddgst": false, 00:19:46.174 "dhchap_key": "key2", 00:19:46.174 "method": "bdev_nvme_attach_controller", 00:19:46.174 "req_id": 1 00:19:46.174 } 00:19:46.174 Got JSON-RPC error response 00:19:46.174 response: 00:19:46.174 { 00:19:46.174 "code": -5, 00:19:46.174 "message": "Input/output error" 00:19:46.174 } 00:19:46.174 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:46.174 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:46.174 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:46.174 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:46.174 21:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:46.174 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.174 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.174 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.174 21:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.174 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.174 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.174 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.174 21:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:46.174 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:46.175 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:46.175 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:46.175 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.175 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:46.175 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.175 21:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:46.175 21:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:46.766 request: 00:19:46.766 { 00:19:46.766 "name": "nvme0", 00:19:46.766 "trtype": "tcp", 00:19:46.766 "traddr": "10.0.0.2", 00:19:46.766 "adrfam": "ipv4", 00:19:46.766 "trsvcid": "4420", 00:19:46.766 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:46.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:46.766 "prchk_reftag": false, 00:19:46.766 "prchk_guard": false, 00:19:46.766 "hdgst": false, 00:19:46.766 "ddgst": false, 00:19:46.766 "dhchap_key": "key1", 00:19:46.766 "dhchap_ctrlr_key": "ckey2", 00:19:46.766 "method": "bdev_nvme_attach_controller", 00:19:46.766 "req_id": 1 00:19:46.766 } 00:19:46.766 Got JSON-RPC error response 00:19:46.766 response: 00:19:46.766 { 00:19:46.766 "code": -5, 00:19:46.766 "message": "Input/output error" 00:19:46.766 } 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.766 21:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.026 request: 00:19:47.026 { 00:19:47.026 "name": "nvme0", 00:19:47.026 "trtype": "tcp", 00:19:47.026 "traddr": "10.0.0.2", 00:19:47.026 "adrfam": "ipv4", 00:19:47.026 "trsvcid": "4420", 00:19:47.027 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:47.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:47.027 "prchk_reftag": false, 00:19:47.027 "prchk_guard": false, 00:19:47.027 "hdgst": false, 00:19:47.027 "ddgst": false, 00:19:47.027 "dhchap_key": "key1", 00:19:47.027 "dhchap_ctrlr_key": "ckey1", 00:19:47.027 "method": "bdev_nvme_attach_controller", 00:19:47.027 "req_id": 1 00:19:47.027 } 00:19:47.027 Got JSON-RPC error response 00:19:47.027 response: 00:19:47.027 { 00:19:47.027 "code": -5, 00:19:47.027 "message": "Input/output error" 00:19:47.027 } 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2171437 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2171437 ']' 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2171437 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2171437 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2171437' 00:19:47.288 killing process with pid 2171437 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2171437 00:19:47.288 21:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2171437 00:19:47.288 21:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:47.288 21:35:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:47.288 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:47.288 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.288 21:35:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2197909 00:19:47.288 21:35:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2197909 00:19:47.288 21:35:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:47.288 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2197909 ']' 00:19:47.288 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.288 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.288 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.288 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.288 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.252 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.252 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:48.252 21:35:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:48.252 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:48.252 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.252 21:35:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.252 21:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:48.252 21:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2197909 00:19:48.252 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2197909 ']' 00:19:48.252 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.252 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.252 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
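
The passes traced above all follow the same positive-path shape: the target registers the host NQN with a DH-HMAC-CHAP key pair, the host initiator is pinned to one digest/DH-group combination, an authenticated attach is attempted with matching keys, and the qpair's auth state is then checked on the target. A minimal shell sketch of that sequence follows; "rpc.py" stands for the full scripts/rpc.py path used in the trace, key1/ckey1 are key names registered earlier in the script (not shown here), and the NQNs and addresses are the ones from this run.

  # Target side: allow this host on cnode0 with DH-HMAC-CHAP keys key1/ckey1.
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Host side: pin the digest and DH group under test, then attach with the same keys.
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Verify: the controller is listed on the host, and the target reports the qpair
  # with auth state "completed", digest sha512, dhgroup ffdhe8192.
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
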
00:19:48.252 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.252 21:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.513 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.086 00:19:49.086 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.086 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.086 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.086 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.086 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.086 21:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.086 21:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.086 21:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.086 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.086 { 00:19:49.086 
"cntlid": 1, 00:19:49.086 "qid": 0, 00:19:49.086 "state": "enabled", 00:19:49.086 "thread": "nvmf_tgt_poll_group_000", 00:19:49.086 "listen_address": { 00:19:49.086 "trtype": "TCP", 00:19:49.086 "adrfam": "IPv4", 00:19:49.086 "traddr": "10.0.0.2", 00:19:49.086 "trsvcid": "4420" 00:19:49.086 }, 00:19:49.086 "peer_address": { 00:19:49.086 "trtype": "TCP", 00:19:49.086 "adrfam": "IPv4", 00:19:49.086 "traddr": "10.0.0.1", 00:19:49.086 "trsvcid": "33604" 00:19:49.086 }, 00:19:49.086 "auth": { 00:19:49.086 "state": "completed", 00:19:49.086 "digest": "sha512", 00:19:49.086 "dhgroup": "ffdhe8192" 00:19:49.086 } 00:19:49.086 } 00:19:49.086 ]' 00:19:49.086 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.347 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.347 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.347 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.347 21:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.347 21:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.347 21:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.347 21:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.607 21:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Mzg4YWQwMWY4YjU5MjgxMThjY2FjMjc4YWM2YjQ0MjA2ZTdiNTczMjAwNDMyNjJjZjEwMzAzNjBjODRlODcyMSKnTgc=: 00:19:50.179 21:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.179 21:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:50.179 21:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.179 21:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.179 21:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.179 21:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:50.179 21:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.179 21:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.179 21:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.179 21:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:50.179 21:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:50.440 21:35:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.440 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:50.440 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.440 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:50.440 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.440 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:50.440 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.440 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.440 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.701 request: 00:19:50.701 { 00:19:50.701 "name": "nvme0", 00:19:50.701 "trtype": "tcp", 00:19:50.701 "traddr": "10.0.0.2", 00:19:50.701 "adrfam": "ipv4", 00:19:50.701 "trsvcid": "4420", 00:19:50.701 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:50.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:50.701 "prchk_reftag": false, 00:19:50.701 "prchk_guard": false, 00:19:50.701 "hdgst": false, 00:19:50.701 "ddgst": false, 00:19:50.701 "dhchap_key": "key3", 00:19:50.701 "method": "bdev_nvme_attach_controller", 00:19:50.701 "req_id": 1 00:19:50.701 } 00:19:50.701 Got JSON-RPC error response 00:19:50.701 response: 00:19:50.701 { 00:19:50.701 "code": -5, 00:19:50.701 "message": "Input/output error" 00:19:50.701 } 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.701 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.962 request: 00:19:50.962 { 00:19:50.962 "name": "nvme0", 00:19:50.962 "trtype": "tcp", 00:19:50.962 "traddr": "10.0.0.2", 00:19:50.962 "adrfam": "ipv4", 00:19:50.962 "trsvcid": "4420", 00:19:50.962 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:50.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:50.962 "prchk_reftag": false, 00:19:50.962 "prchk_guard": false, 00:19:50.962 "hdgst": false, 00:19:50.962 "ddgst": false, 00:19:50.962 "dhchap_key": "key3", 00:19:50.962 "method": "bdev_nvme_attach_controller", 00:19:50.962 "req_id": 1 00:19:50.962 } 00:19:50.962 Got JSON-RPC error response 00:19:50.962 response: 00:19:50.962 { 00:19:50.962 "code": -5, 00:19:50.962 "message": "Input/output error" 00:19:50.962 } 00:19:50.962 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:50.962 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:50.962 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:50.962 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:50.962 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:50.962 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:50.962 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:50.962 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:50.962 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:50.962 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:51.223 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:51.223 request: 00:19:51.223 { 00:19:51.223 "name": "nvme0", 00:19:51.223 "trtype": "tcp", 00:19:51.223 "traddr": "10.0.0.2", 00:19:51.223 "adrfam": "ipv4", 00:19:51.223 "trsvcid": "4420", 00:19:51.223 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:51.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:51.223 "prchk_reftag": false, 00:19:51.223 "prchk_guard": false, 00:19:51.223 "hdgst": false, 00:19:51.223 "ddgst": false, 00:19:51.223 
"dhchap_key": "key0", 00:19:51.223 "dhchap_ctrlr_key": "key1", 00:19:51.223 "method": "bdev_nvme_attach_controller", 00:19:51.223 "req_id": 1 00:19:51.223 } 00:19:51.223 Got JSON-RPC error response 00:19:51.223 response: 00:19:51.223 { 00:19:51.223 "code": -5, 00:19:51.223 "message": "Input/output error" 00:19:51.223 } 00:19:51.224 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:51.224 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:51.224 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:51.224 21:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:51.224 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:51.224 21:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:51.484 00:19:51.484 21:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:51.484 21:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:51.484 21:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.745 21:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.745 21:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.745 21:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.745 21:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:51.745 21:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:51.745 21:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2171544 00:19:51.745 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2171544 ']' 00:19:51.745 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2171544 00:19:51.745 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:51.745 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.745 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2171544 00:19:52.006 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:52.006 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:52.006 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2171544' 00:19:52.006 killing process with pid 2171544 00:19:52.006 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2171544 00:19:52.006 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2171544 
00:19:52.006 21:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:52.006 21:35:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:52.006 21:35:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:52.006 21:35:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:52.006 21:35:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:52.006 21:35:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.006 21:35:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:52.006 rmmod nvme_tcp 00:19:52.006 rmmod nvme_fabrics 00:19:52.267 rmmod nvme_keyring 00:19:52.267 21:35:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.267 21:35:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:52.267 21:35:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:52.267 21:35:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2197909 ']' 00:19:52.267 21:35:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2197909 00:19:52.267 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2197909 ']' 00:19:52.267 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2197909 00:19:52.267 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:52.267 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.267 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2197909 00:19:52.267 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:52.267 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:52.267 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2197909' 00:19:52.267 killing process with pid 2197909 00:19:52.267 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2197909 00:19:52.267 21:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2197909 00:19:52.267 21:35:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:52.267 21:35:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:52.267 21:35:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:52.267 21:35:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:52.267 21:35:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:52.267 21:35:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.267 21:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.267 21:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.812 21:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:54.812 21:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.XNd /tmp/spdk.key-sha256.LfR /tmp/spdk.key-sha384.KDu /tmp/spdk.key-sha512.94Z /tmp/spdk.key-sha512.hNV /tmp/spdk.key-sha384.JNI /tmp/spdk.key-sha256.O09 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:54.812 00:19:54.812 real 2m24.474s 00:19:54.812 user 5m20.983s 00:19:54.812 sys 0m21.644s 00:19:54.812 21:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:54.812 21:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.812 ************************************ 00:19:54.812 END TEST nvmf_auth_target 00:19:54.812 ************************************ 00:19:54.812 21:35:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:54.812 21:35:44 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:54.812 21:35:44 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:54.812 21:35:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:54.812 21:35:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.812 21:35:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:54.812 ************************************ 00:19:54.812 START TEST nvmf_bdevio_no_huge 00:19:54.812 ************************************ 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:54.812 * Looking for test storage... 00:19:54.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
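For orientation, the nvmftestinit phase a little further down builds the test network from the two detected E810 ports: the first port (cvl_0_0) is moved into a private network namespace and plays the target, while the second (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of the commands visible in the trace, with interface and namespace names as logged:

  ip netns add cvl_0_0_ns_spdk                                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator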
00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:54.812 21:35:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:01.402 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:01.402 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:01.402 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:01.402 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:01.402 21:35:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:01.402 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:01.402 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:01.402 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:01.402 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.402 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:01.402 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:01.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:20:01.663 00:20:01.663 --- 10.0.0.2 ping statistics --- 00:20:01.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.663 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:01.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:20:01.663 00:20:01.663 --- 10.0.0.1 ping statistics --- 00:20:01.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.663 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2202956 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2202956 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 2202956 ']' 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.663 21:35:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.663 [2024-07-15 21:35:51.321985] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:20:01.663 [2024-07-15 21:35:51.322051] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:01.663 [2024-07-15 21:35:51.414245] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.924 [2024-07-15 21:35:51.522219] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:01.924 [2024-07-15 21:35:51.522273] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.924 [2024-07-15 21:35:51.522281] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.924 [2024-07-15 21:35:51.522288] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.924 [2024-07-15 21:35:51.522294] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.924 [2024-07-15 21:35:51.522457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:01.924 [2024-07-15 21:35:51.522596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:01.924 [2024-07-15 21:35:51.522757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.924 [2024-07-15 21:35:51.522758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:02.495 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.495 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:02.495 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:02.495 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:02.495 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.495 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.495 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:02.495 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.495 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.496 [2024-07-15 21:35:52.157256] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.496 Malloc0 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.496 21:35:52 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.496 [2024-07-15 21:35:52.211048] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:02.496 { 00:20:02.496 "params": { 00:20:02.496 "name": "Nvme$subsystem", 00:20:02.496 "trtype": "$TEST_TRANSPORT", 00:20:02.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.496 "adrfam": "ipv4", 00:20:02.496 "trsvcid": "$NVMF_PORT", 00:20:02.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.496 "hdgst": ${hdgst:-false}, 00:20:02.496 "ddgst": ${ddgst:-false} 00:20:02.496 }, 00:20:02.496 "method": "bdev_nvme_attach_controller" 00:20:02.496 } 00:20:02.496 EOF 00:20:02.496 )") 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:02.496 21:35:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:02.496 "params": { 00:20:02.496 "name": "Nvme1", 00:20:02.496 "trtype": "tcp", 00:20:02.496 "traddr": "10.0.0.2", 00:20:02.496 "adrfam": "ipv4", 00:20:02.496 "trsvcid": "4420", 00:20:02.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.496 "hdgst": false, 00:20:02.496 "ddgst": false 00:20:02.496 }, 00:20:02.496 "method": "bdev_nvme_attach_controller" 00:20:02.496 }' 00:20:02.496 [2024-07-15 21:35:52.268677] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
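Condensed for reference, the target-side setup that precedes the bdevio run, as issued through the harness's rpc_cmd wrapper (scripts/rpc.py aimed at the target's RPC socket); bdevio then connects as a host using the generated bdev_nvme_attach_controller config printed above:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                     # TCP transport for the target
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose Malloc0 through the subsystem
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

This matches the "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)" I/O target reported by bdevio below.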
00:20:02.496 [2024-07-15 21:35:52.268742] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2203297 ] 00:20:02.756 [2024-07-15 21:35:52.335801] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:02.756 [2024-07-15 21:35:52.432817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.756 [2024-07-15 21:35:52.432937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.756 [2024-07-15 21:35:52.432940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.018 I/O targets: 00:20:03.018 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:03.018 00:20:03.018 00:20:03.018 CUnit - A unit testing framework for C - Version 2.1-3 00:20:03.018 http://cunit.sourceforge.net/ 00:20:03.018 00:20:03.018 00:20:03.018 Suite: bdevio tests on: Nvme1n1 00:20:03.018 Test: blockdev write read block ...passed 00:20:03.018 Test: blockdev write zeroes read block ...passed 00:20:03.018 Test: blockdev write zeroes read no split ...passed 00:20:03.018 Test: blockdev write zeroes read split ...passed 00:20:03.018 Test: blockdev write zeroes read split partial ...passed 00:20:03.018 Test: blockdev reset ...[2024-07-15 21:35:52.811572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:03.018 [2024-07-15 21:35:52.811631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa18db0 (9): Bad file descriptor 00:20:03.279 [2024-07-15 21:35:52.920585] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:03.279 passed 00:20:03.279 Test: blockdev write read 8 blocks ...passed 00:20:03.279 Test: blockdev write read size > 128k ...passed 00:20:03.279 Test: blockdev write read invalid size ...passed 00:20:03.279 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:03.279 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:03.279 Test: blockdev write read max offset ...passed 00:20:03.539 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:03.539 Test: blockdev writev readv 8 blocks ...passed 00:20:03.539 Test: blockdev writev readv 30 x 1block ...passed 00:20:03.539 Test: blockdev writev readv block ...passed 00:20:03.539 Test: blockdev writev readv size > 128k ...passed 00:20:03.539 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:03.539 Test: blockdev comparev and writev ...[2024-07-15 21:35:53.188697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.539 [2024-07-15 21:35:53.188720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.539 [2024-07-15 21:35:53.188730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.539 [2024-07-15 21:35:53.188736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.539 [2024-07-15 21:35:53.189172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.539 [2024-07-15 21:35:53.189180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:03.539 [2024-07-15 21:35:53.189190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.539 [2024-07-15 21:35:53.189195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:03.539 [2024-07-15 21:35:53.189634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.539 [2024-07-15 21:35:53.189641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:03.539 [2024-07-15 21:35:53.189650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.539 [2024-07-15 21:35:53.189655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:03.539 [2024-07-15 21:35:53.190115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.539 [2024-07-15 21:35:53.190125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:03.539 [2024-07-15 21:35:53.190134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.539 [2024-07-15 21:35:53.190139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:03.539 passed 00:20:03.539 Test: blockdev nvme passthru rw ...passed 00:20:03.539 Test: blockdev nvme passthru vendor specific ...[2024-07-15 21:35:53.274882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.539 [2024-07-15 21:35:53.274892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:03.539 [2024-07-15 21:35:53.275233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.539 [2024-07-15 21:35:53.275241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:03.539 [2024-07-15 21:35:53.275566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.539 [2024-07-15 21:35:53.275572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:03.539 [2024-07-15 21:35:53.275886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.539 [2024-07-15 21:35:53.275893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:03.539 passed 00:20:03.539 Test: blockdev nvme admin passthru ...passed 00:20:03.539 Test: blockdev copy ...passed 00:20:03.539 00:20:03.539 Run Summary: Type Total Ran Passed Failed Inactive 00:20:03.539 suites 1 1 n/a 0 0 00:20:03.539 tests 23 23 23 0 0 00:20:03.539 asserts 152 152 152 0 n/a 00:20:03.539 00:20:03.539 Elapsed time = 1.432 seconds 00:20:03.799 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:03.799 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.799 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:04.059 rmmod nvme_tcp 00:20:04.059 rmmod nvme_fabrics 00:20:04.059 rmmod nvme_keyring 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2202956 ']' 00:20:04.059 21:35:53 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2202956 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 2202956 ']' 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 2202956 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2202956 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2202956' 00:20:04.059 killing process with pid 2202956 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 2202956 00:20:04.059 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 2202956 00:20:04.319 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:04.319 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:04.319 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:04.319 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:04.319 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:04.319 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.319 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.319 21:35:53 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.319 21:35:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:06.319 00:20:06.319 real 0m11.856s 00:20:06.319 user 0m13.933s 00:20:06.319 sys 0m6.127s 00:20:06.319 21:35:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:06.319 21:35:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:06.319 ************************************ 00:20:06.319 END TEST nvmf_bdevio_no_huge 00:20:06.319 ************************************ 00:20:06.319 21:35:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:06.319 21:35:56 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:06.319 21:35:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:06.319 21:35:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.319 21:35:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:06.579 ************************************ 00:20:06.579 START TEST nvmf_tls 00:20:06.579 ************************************ 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:06.579 * Looking for test storage... 
00:20:06.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.579 21:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.580 21:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.580 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:06.580 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:06.580 21:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:06.580 21:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.725 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.725 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:14.725 
21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:14.725 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:14.725 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:14.726 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:14.726 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:14.726 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:14.726 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:14.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:20:14.726 00:20:14.726 --- 10.0.0.2 ping statistics --- 00:20:14.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.726 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:14.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:20:14.726 00:20:14.726 --- 10.0.0.1 ping statistics --- 00:20:14.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.726 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2207753 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2207753 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2207753 ']' 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.726 21:36:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.726 [2024-07-15 21:36:03.601208] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:20:14.726 [2024-07-15 21:36:03.601303] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.726 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.726 [2024-07-15 21:36:03.694749] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.726 [2024-07-15 21:36:03.788114] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.726 [2024-07-15 21:36:03.788175] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:14.726 [2024-07-15 21:36:03.788183] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.726 [2024-07-15 21:36:03.788190] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.726 [2024-07-15 21:36:03.788196] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.726 [2024-07-15 21:36:03.788221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.726 21:36:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.726 21:36:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:14.726 21:36:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:14.726 21:36:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.726 21:36:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.727 21:36:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.727 21:36:04 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:14.727 21:36:04 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:14.988 true 00:20:14.988 21:36:04 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:14.988 21:36:04 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:14.988 21:36:04 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:14.988 21:36:04 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:14.988 21:36:04 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:15.249 21:36:04 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:15.249 21:36:04 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:15.510 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:15.510 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:15.510 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:15.510 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:15.510 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:15.771 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:15.771 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:15.771 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:15.771 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:16.032 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:16.032 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:16.032 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:16.032 21:36:05 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:16.032 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:16.293 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:16.293 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:16.293 21:36:05 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:16.293 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:16.293 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.1khaWzZplv 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.QTLLGiA2AF 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.1khaWzZplv 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.QTLLGiA2AF 00:20:16.554 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:16.815 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:17.075 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.1khaWzZplv 00:20:17.075 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1khaWzZplv 00:20:17.075 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:17.075 [2024-07-15 21:36:06.848046] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.075 21:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:17.336 21:36:07 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:17.596 [2024-07-15 21:36:07.152792] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.596 [2024-07-15 21:36:07.152976] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.596 21:36:07 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:17.596 malloc0 00:20:17.596 21:36:07 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:17.857 21:36:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1khaWzZplv 00:20:17.857 [2024-07-15 21:36:07.603804] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:17.857 21:36:07 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.1khaWzZplv 00:20:17.857 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.091 Initializing NVMe Controllers 00:20:30.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:30.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:30.091 Initialization complete. Launching workers. 
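Before the perf numbers print, it is worth gathering the target-side bring-up traced above and the perf invocation just launched into one condensed sketch. Every command below is taken from the trace; only the Jenkins workspace path is shortened to the spdk checkout root, and the script actually waits for the RPC socket via waitforlisten before issuing the RPCs:

    RPC=./scripts/rpc.py                      # talks to /var/tmp/spdk.sock by default
    NS="ip netns exec cvl_0_0_ns_spdk"
    KEY=/tmp/tmp.1khaWzZplv                   # interchange-format PSK file, chmod 0600

    # Target app inside the namespace, held in --wait-for-rpc so the socket layer
    # can be configured before framework initialization.
    $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &

    # TLS options go onto the ssl socket implementation first, then init the framework.
    $RPC sock_set_default_impl -i ssl
    $RPC sock_impl_set_options -i ssl --tls-version 13
    $RPC framework_start_init

    # TCP transport, one subsystem with a TLS listener (-k), a malloc namespace,
    # and host1 admitted with the PSK generated earlier.
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

    # Initiator: spdk_nvme_perf over the ssl socket implementation with the same PSK.
    $NS ./build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path "$KEY"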
00:20:30.091 ======================================================== 00:20:30.091 Latency(us) 00:20:30.091 Device Information : IOPS MiB/s Average min max 00:20:30.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19017.95 74.29 3365.28 1207.11 5391.37 00:20:30.091 ======================================================== 00:20:30.091 Total : 19017.95 74.29 3365.28 1207.11 5391.37 00:20:30.091 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1khaWzZplv 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1khaWzZplv' 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2210940 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2210940 /var/tmp/bdevperf.sock 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2210940 ']' 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.091 21:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.091 [2024-07-15 21:36:17.756266] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
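The /tmp/tmp.1khaWzZplv file shared by nvmf_subsystem_add_host on the target and by the bdevperf attach starting here holds a PSK in the NVMeTLSkey-1 interchange format produced earlier by format_interchange_psk. The base64 in the trace decodes back to the literal key text plus a four-byte trailer, which points at a key-text-plus-CRC32 layout; the sketch below reproduces that under the assumption that the trailer is a little-endian CRC32 (the byte order is not something the trace itself confirms):

    # Hypothetical re-creation of the interchange key; digest 1 yields the "01"
    # field seen above, digest 2 the "02" field used later for /tmp/tmp.DlrdnXcx52.
    # Note the key is the hex *text*, not decoded bytes: the trace's base64 decodes
    # back to the ASCII string "00112233445566778899aabbccddeeff".
    python3 -c 'import base64, sys, zlib; key = sys.argv[1].encode(); digest = int(sys.argv[2]); crc = zlib.crc32(key).to_bytes(4, "little"); print("NVMeTLSkey-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()))' \
        00112233445566778899aabbccddeeff 1
    # should reproduce NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    # if the little-endian CRC assumption holds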
00:20:30.091 [2024-07-15 21:36:17.756323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2210940 ] 00:20:30.091 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.091 [2024-07-15 21:36:17.804947] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.091 [2024-07-15 21:36:17.857333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.091 21:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.091 21:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:30.091 21:36:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1khaWzZplv 00:20:30.091 [2024-07-15 21:36:18.641927] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.091 [2024-07-15 21:36:18.641981] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:30.091 TLSTESTn1 00:20:30.091 21:36:18 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:30.091 Running I/O for 10 seconds... 00:20:40.124 00:20:40.124 Latency(us) 00:20:40.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.124 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:40.124 Verification LBA range: start 0x0 length 0x2000 00:20:40.124 TLSTESTn1 : 10.04 2415.62 9.44 0.00 0.00 52874.23 6335.15 137188.69 00:20:40.124 =================================================================================================================== 00:20:40.124 Total : 2415.62 9.44 0.00 0.00 52874.23 6335.15 137188.69 00:20:40.124 0 00:20:40.124 21:36:28 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:40.124 21:36:28 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2210940 00:20:40.124 21:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2210940 ']' 00:20:40.124 21:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2210940 00:20:40.124 21:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:40.124 21:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:40.124 21:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2210940 00:20:40.124 21:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:40.124 21:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:40.124 21:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2210940' 00:20:40.124 killing process with pid 2210940 00:20:40.124 21:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2210940 00:20:40.124 Received shutdown signal, test time was about 10.000000 seconds 00:20:40.124 00:20:40.124 Latency(us) 00:20:40.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:40.124 =================================================================================================================== 00:20:40.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.124 [2024-07-15 21:36:28.975193] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:40.124 21:36:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2210940 00:20:40.124 21:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QTLLGiA2AF 00:20:40.124 21:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:40.124 21:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QTLLGiA2AF 00:20:40.124 21:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:40.124 21:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.124 21:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:40.124 21:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.124 21:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QTLLGiA2AF 00:20:40.124 21:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:40.124 21:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:40.124 21:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:40.125 21:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QTLLGiA2AF' 00:20:40.125 21:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:40.125 21:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2213281 00:20:40.125 21:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.125 21:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:40.125 21:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2213281 /var/tmp/bdevperf.sock 00:20:40.125 21:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2213281 ']' 00:20:40.125 21:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.125 21:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.125 21:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.125 21:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.125 21:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.125 [2024-07-15 21:36:29.139280] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
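run_bdevperf, which just completed cleanly for test case 143 and is being started again here for the expected-failure case 146, reduces to three steps once the trace bookkeeping is stripped (paths shortened to the spdk checkout root; the real helper polls the socket via waitforlisten between steps 1 and 2):

    BDEVPERF=./build/examples/bdevperf
    BDEVPERF_PY=./examples/bdev/bdevperf/bdevperf.py
    RPC=./scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # 1. Start bdevperf idle (-z) on its own RPC socket.
    $BDEVPERF -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!

    # 2. Attach an NVMe-oF/TCP controller over TLS; the --psk file decides whether
    #    the handshake, and therefore the whole test, can succeed.
    $RPC -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.1khaWzZplv

    # 3. Kick off the queued verify workload against the TLSTESTn1 bdev.
    $BDEVPERF_PY -t 20 -s $SOCK perform_tests

    kill $bdevperf_pid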
00:20:40.125 [2024-07-15 21:36:29.139336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213281 ] 00:20:40.125 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.125 [2024-07-15 21:36:29.188088] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.125 [2024-07-15 21:36:29.239433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.125 21:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.125 21:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:40.125 21:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QTLLGiA2AF 00:20:40.386 [2024-07-15 21:36:30.052096] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.386 [2024-07-15 21:36:30.052164] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:40.386 [2024-07-15 21:36:30.057873] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:40.386 [2024-07-15 21:36:30.058350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9a0b0 (107): Transport endpoint is not connected 00:20:40.386 [2024-07-15 21:36:30.059345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e9a0b0 (9): Bad file descriptor 00:20:40.386 [2024-07-15 21:36:30.060345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:40.386 [2024-07-15 21:36:30.060353] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:40.386 [2024-07-15 21:36:30.060359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:40.386 request: 00:20:40.386 { 00:20:40.386 "name": "TLSTEST", 00:20:40.386 "trtype": "tcp", 00:20:40.386 "traddr": "10.0.0.2", 00:20:40.386 "adrfam": "ipv4", 00:20:40.386 "trsvcid": "4420", 00:20:40.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.386 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.386 "prchk_reftag": false, 00:20:40.386 "prchk_guard": false, 00:20:40.386 "hdgst": false, 00:20:40.386 "ddgst": false, 00:20:40.386 "psk": "/tmp/tmp.QTLLGiA2AF", 00:20:40.386 "method": "bdev_nvme_attach_controller", 00:20:40.386 "req_id": 1 00:20:40.386 } 00:20:40.386 Got JSON-RPC error response 00:20:40.386 response: 00:20:40.386 { 00:20:40.386 "code": -5, 00:20:40.386 "message": "Input/output error" 00:20:40.386 } 00:20:40.386 21:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2213281 00:20:40.386 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2213281 ']' 00:20:40.386 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2213281 00:20:40.386 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:40.386 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:40.386 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2213281 00:20:40.386 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:40.386 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:40.386 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2213281' 00:20:40.386 killing process with pid 2213281 00:20:40.386 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2213281 00:20:40.386 Received shutdown signal, test time was about 10.000000 seconds 00:20:40.386 00:20:40.386 Latency(us) 00:20:40.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.386 =================================================================================================================== 00:20:40.386 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:40.386 [2024-07-15 21:36:30.133969] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:40.386 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2213281 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1khaWzZplv 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1khaWzZplv 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1khaWzZplv 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1khaWzZplv' 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2213420 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2213420 /var/tmp/bdevperf.sock 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2213420 ']' 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.647 21:36:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.647 [2024-07-15 21:36:30.291971] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:20:40.647 [2024-07-15 21:36:30.292029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213420 ] 00:20:40.647 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.647 [2024-07-15 21:36:30.342139] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.647 [2024-07-15 21:36:30.393769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.632 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.632 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:41.632 21:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.1khaWzZplv 00:20:41.632 [2024-07-15 21:36:31.202752] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.632 [2024-07-15 21:36:31.202818] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:41.632 [2024-07-15 21:36:31.210608] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:41.632 [2024-07-15 21:36:31.210626] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:41.632 [2024-07-15 21:36:31.210644] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:41.632 [2024-07-15 21:36:31.210925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68a0b0 (107): Transport endpoint is not connected 00:20:41.633 [2024-07-15 21:36:31.211918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68a0b0 (9): Bad file descriptor 00:20:41.633 [2024-07-15 21:36:31.212920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.633 [2024-07-15 21:36:31.212926] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:41.633 [2024-07-15 21:36:31.212933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
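This attach is supposed to fail: host2 never had a PSK registered on the target, so the TLS layer cannot find a key for its identity and the controller ends up in a failed state (the failing RPC and its error response follow below). The NOT wrapper from autotest_common.sh is what turns such expected failures into passing assertions for cases 146 through 155; a minimal sketch of the idea, with the real helper's argument validation and signal-exit handling left out:

    # Minimal sketch of the NOT() pattern: run the command, capture its status,
    # and succeed only if the wrapped command failed.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # e.g. attaching with a PSK the target never learned about must fail
    # (run_bdevperf being the helper traced above):
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1khaWzZplv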
00:20:41.633 request: 00:20:41.633 { 00:20:41.633 "name": "TLSTEST", 00:20:41.633 "trtype": "tcp", 00:20:41.633 "traddr": "10.0.0.2", 00:20:41.633 "adrfam": "ipv4", 00:20:41.633 "trsvcid": "4420", 00:20:41.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.633 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:41.633 "prchk_reftag": false, 00:20:41.633 "prchk_guard": false, 00:20:41.633 "hdgst": false, 00:20:41.633 "ddgst": false, 00:20:41.633 "psk": "/tmp/tmp.1khaWzZplv", 00:20:41.633 "method": "bdev_nvme_attach_controller", 00:20:41.633 "req_id": 1 00:20:41.633 } 00:20:41.633 Got JSON-RPC error response 00:20:41.633 response: 00:20:41.633 { 00:20:41.633 "code": -5, 00:20:41.633 "message": "Input/output error" 00:20:41.633 } 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2213420 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2213420 ']' 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2213420 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2213420 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2213420' 00:20:41.633 killing process with pid 2213420 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2213420 00:20:41.633 Received shutdown signal, test time was about 10.000000 seconds 00:20:41.633 00:20:41.633 Latency(us) 00:20:41.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.633 =================================================================================================================== 00:20:41.633 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:41.633 [2024-07-15 21:36:31.298078] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2213420 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1khaWzZplv 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1khaWzZplv 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1khaWzZplv 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1khaWzZplv' 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2213642 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2213642 /var/tmp/bdevperf.sock 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2213642 ']' 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.633 21:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.895 [2024-07-15 21:36:31.454208] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:20:41.895 [2024-07-15 21:36:31.454264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213642 ] 00:20:41.895 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.895 [2024-07-15 21:36:31.504274] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.895 [2024-07-15 21:36:31.554952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.466 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:42.466 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:42.466 21:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1khaWzZplv 00:20:42.727 [2024-07-15 21:36:32.359700] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.727 [2024-07-15 21:36:32.359762] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:42.727 [2024-07-15 21:36:32.364089] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:42.727 [2024-07-15 21:36:32.364105] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:42.727 [2024-07-15 21:36:32.364129] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:42.727 [2024-07-15 21:36:32.364792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be70b0 (107): Transport endpoint is not connected 00:20:42.727 [2024-07-15 21:36:32.365784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be70b0 (9): Bad file descriptor 00:20:42.727 [2024-07-15 21:36:32.366786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:42.727 [2024-07-15 21:36:32.366793] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:42.727 [2024-07-15 21:36:32.366800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:42.727 request: 00:20:42.727 { 00:20:42.727 "name": "TLSTEST", 00:20:42.727 "trtype": "tcp", 00:20:42.727 "traddr": "10.0.0.2", 00:20:42.727 "adrfam": "ipv4", 00:20:42.727 "trsvcid": "4420", 00:20:42.727 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:42.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.727 "prchk_reftag": false, 00:20:42.727 "prchk_guard": false, 00:20:42.727 "hdgst": false, 00:20:42.727 "ddgst": false, 00:20:42.727 "psk": "/tmp/tmp.1khaWzZplv", 00:20:42.727 "method": "bdev_nvme_attach_controller", 00:20:42.727 "req_id": 1 00:20:42.727 } 00:20:42.727 Got JSON-RPC error response 00:20:42.727 response: 00:20:42.727 { 00:20:42.727 "code": -5, 00:20:42.727 "message": "Input/output error" 00:20:42.727 } 00:20:42.727 21:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2213642 00:20:42.727 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2213642 ']' 00:20:42.727 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2213642 00:20:42.727 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:42.727 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.727 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2213642 00:20:42.727 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:42.727 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:42.727 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2213642' 00:20:42.727 killing process with pid 2213642 00:20:42.727 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2213642 00:20:42.727 Received shutdown signal, test time was about 10.000000 seconds 00:20:42.727 00:20:42.727 Latency(us) 00:20:42.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.727 =================================================================================================================== 00:20:42.727 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:42.727 [2024-07-15 21:36:32.435819] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:42.727 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2213642 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2213980 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2213980 /var/tmp/bdevperf.sock 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2213980 ']' 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:42.988 21:36:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.988 [2024-07-15 21:36:32.594015] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:20:42.988 [2024-07-15 21:36:32.594071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213980 ] 00:20:42.988 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.988 [2024-07-15 21:36:32.644174] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.988 [2024-07-15 21:36:32.694732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:43.928 [2024-07-15 21:36:33.506213] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:43.928 [2024-07-15 21:36:33.507929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bf6e0 (9): Bad file descriptor 00:20:43.928 [2024-07-15 21:36:33.508928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:43.928 [2024-07-15 21:36:33.508936] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:43.928 [2024-07-15 21:36:33.508942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
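Here the attach carries no --psk at all against a listener that requires TLS, so the connection is simply dropped and the controller again lands in a failed state (the failing request and response follow). After each of these runs the bdevperf instance is torn down with killprocess; condensed from the traces in this section, with the real helper's handling of sudo-wrapped processes omitted:

    # Condensed killprocess, as traced after every bdevperf run above.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0          # nothing left to kill
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 1      # sudo wrappers need different handling
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }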
00:20:43.928 request: 00:20:43.928 { 00:20:43.928 "name": "TLSTEST", 00:20:43.928 "trtype": "tcp", 00:20:43.928 "traddr": "10.0.0.2", 00:20:43.928 "adrfam": "ipv4", 00:20:43.928 "trsvcid": "4420", 00:20:43.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:43.928 "prchk_reftag": false, 00:20:43.928 "prchk_guard": false, 00:20:43.928 "hdgst": false, 00:20:43.928 "ddgst": false, 00:20:43.928 "method": "bdev_nvme_attach_controller", 00:20:43.928 "req_id": 1 00:20:43.928 } 00:20:43.928 Got JSON-RPC error response 00:20:43.928 response: 00:20:43.928 { 00:20:43.928 "code": -5, 00:20:43.928 "message": "Input/output error" 00:20:43.928 } 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2213980 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2213980 ']' 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2213980 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2213980 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2213980' 00:20:43.928 killing process with pid 2213980 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2213980 00:20:43.928 Received shutdown signal, test time was about 10.000000 seconds 00:20:43.928 00:20:43.928 Latency(us) 00:20:43.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.928 =================================================================================================================== 00:20:43.928 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2213980 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2207753 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2207753 ']' 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2207753 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.928 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2207753 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2207753' 00:20:44.189 
killing process with pid 2207753 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2207753 00:20:44.189 [2024-07-15 21:36:33.750237] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2207753 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.DlrdnXcx52 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.DlrdnXcx52 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2214231 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2214231 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2214231 ']' 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.189 21:36:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.189 [2024-07-15 21:36:33.981244] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
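waitforlisten, used moments ago for the fresh nvmf_tgt (pid 2214231) exactly as it was for every bdevperf instance earlier, simply polls the application's RPC socket until it answers, giving up if the process dies first. A rough sketch of that idea only; the retry budget and the probe RPC below are assumptions, not read from the trace:

    # Rough sketch of waitforlisten: poll the RPC socket until the app responds,
    # bailing out if the process exits first. Retry count and probe call are assumed.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1      # app died while starting
            if ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }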
00:20:44.189 [2024-07-15 21:36:33.981301] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.449 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.450 [2024-07-15 21:36:34.063450] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.450 [2024-07-15 21:36:34.115685] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.450 [2024-07-15 21:36:34.115722] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.450 [2024-07-15 21:36:34.115727] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.450 [2024-07-15 21:36:34.115731] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.450 [2024-07-15 21:36:34.115735] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.450 [2024-07-15 21:36:34.115750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.059 21:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.059 21:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:45.059 21:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:45.059 21:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:45.059 21:36:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.059 21:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.059 21:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.DlrdnXcx52 00:20:45.059 21:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.DlrdnXcx52 00:20:45.059 21:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:45.319 [2024-07-15 21:36:34.913329] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.319 21:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:45.319 21:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:45.579 [2024-07-15 21:36:35.185987] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:45.579 [2024-07-15 21:36:35.186184] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.579 21:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:45.579 malloc0 00:20:45.579 21:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:45.839 21:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.DlrdnXcx52 00:20:45.839 [2024-07-15 21:36:35.633065] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DlrdnXcx52 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.DlrdnXcx52' 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2214598 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2214598 /var/tmp/bdevperf.sock 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2214598 ']' 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.100 21:36:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.100 [2024-07-15 21:36:35.699190] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
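Stripped of the xtrace noise, the setup_nvmf_tgt sequence above amounts to a handful of RPCs against the freshly started nvmf_tgt; a condensed sketch, with the /tmp path standing in for whatever mktemp produced:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  psk=/tmp/tmp.DlrdnXcx52                                    # mode-0600 key file from the step above
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k: TLS listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$psk"

The deprecation warning emitted by nvmf_tcp_subsystem_add_host is expected here: the file-based PSK path is scheduled for removal in v24.09.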
00:20:46.100 [2024-07-15 21:36:35.699240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2214598 ] 00:20:46.100 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.100 [2024-07-15 21:36:35.747911] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.100 [2024-07-15 21:36:35.800418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.670 21:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:46.670 21:36:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:46.670 21:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DlrdnXcx52 00:20:46.931 [2024-07-15 21:36:36.609097] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.931 [2024-07-15 21:36:36.609157] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:46.931 TLSTESTn1 00:20:46.931 21:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:47.192 Running I/O for 10 seconds... 00:20:57.195 00:20:57.195 Latency(us) 00:20:57.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.195 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:57.195 Verification LBA range: start 0x0 length 0x2000 00:20:57.195 TLSTESTn1 : 10.06 2501.98 9.77 0.00 0.00 51000.91 6007.47 98740.91 00:20:57.195 =================================================================================================================== 00:20:57.195 Total : 2501.98 9.77 0.00 0.00 51000.91 6007.47 98740.91 00:20:57.195 0 00:20:57.195 21:36:46 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:57.195 21:36:46 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2214598 00:20:57.195 21:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2214598 ']' 00:20:57.195 21:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2214598 00:20:57.195 21:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:57.195 21:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.195 21:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2214598 00:20:57.195 21:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:57.195 21:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:57.195 21:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2214598' 00:20:57.195 killing process with pid 2214598 00:20:57.195 21:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2214598 00:20:57.195 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.195 00:20:57.195 Latency(us) 00:20:57.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:57.195 =================================================================================================================== 00:20:57.195 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.195 [2024-07-15 21:36:46.956914] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:57.195 21:36:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2214598 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.DlrdnXcx52 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DlrdnXcx52 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DlrdnXcx52 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DlrdnXcx52 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.DlrdnXcx52' 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2216711 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2216711 /var/tmp/bdevperf.sock 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2216711 ']' 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.456 21:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.456 [2024-07-15 21:36:47.127685] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
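The passing run above follows the run_bdevperf helper: start bdevperf in wait-for-RPC mode on its own socket, attach a TLS-protected controller with the same key file, then drive the verify workload through bdevperf.py. Roughly (the backgrounding and wait-for-listen handling are simplified here):

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DlrdnXcx52
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests

The ~2500 IOPS TLSTESTn1 result above is the output of that perform_tests call; the all-zero Latency table that follows is just the shutdown summary printed once the bdevperf process is killed.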
00:20:57.456 [2024-07-15 21:36:47.127741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216711 ] 00:20:57.456 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.456 [2024-07-15 21:36:47.177700] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.456 [2024-07-15 21:36:47.231399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.399 21:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.399 21:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:58.399 21:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DlrdnXcx52 00:20:58.399 [2024-07-15 21:36:48.044177] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.399 [2024-07-15 21:36:48.044219] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:58.399 [2024-07-15 21:36:48.044225] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.DlrdnXcx52 00:20:58.399 request: 00:20:58.399 { 00:20:58.399 "name": "TLSTEST", 00:20:58.399 "trtype": "tcp", 00:20:58.399 "traddr": "10.0.0.2", 00:20:58.399 "adrfam": "ipv4", 00:20:58.399 "trsvcid": "4420", 00:20:58.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.399 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.399 "prchk_reftag": false, 00:20:58.399 "prchk_guard": false, 00:20:58.399 "hdgst": false, 00:20:58.399 "ddgst": false, 00:20:58.399 "psk": "/tmp/tmp.DlrdnXcx52", 00:20:58.399 "method": "bdev_nvme_attach_controller", 00:20:58.399 "req_id": 1 00:20:58.399 } 00:20:58.399 Got JSON-RPC error response 00:20:58.399 response: 00:20:58.399 { 00:20:58.399 "code": -1, 00:20:58.399 "message": "Operation not permitted" 00:20:58.399 } 00:20:58.399 21:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2216711 00:20:58.399 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2216711 ']' 00:20:58.399 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2216711 00:20:58.399 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:58.399 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.399 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2216711 00:20:58.399 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:58.399 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:58.399 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2216711' 00:20:58.399 killing process with pid 2216711 00:20:58.399 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2216711 00:20:58.399 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.399 00:20:58.399 Latency(us) 00:20:58.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.399 
=================================================================================================================== 00:20:58.399 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:58.399 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2216711 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2214231 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2214231 ']' 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2214231 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2214231 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2214231' 00:20:58.660 killing process with pid 2214231 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2214231 00:20:58.660 [2024-07-15 21:36:48.281875] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2214231 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2217060 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2217060 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2217060 ']' 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
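The JSON-RPC error above is the intended outcome of the permissions check: target/tls.sh@170 loosens the key file to 0666, at which point bdev_nvme refuses to load the PSK ("Incorrect permissions for PSK file", surfaced as Operation not permitted), and a little further down nvmf_subsystem_add_host fails the same way on the target side ("Could not retrieve PSK from file", Internal error). As a sketch of the pattern:

  chmod 0666 /tmp/tmp.DlrdnXcx52      # world-readable key: both initiator and target reject it
  # bdev_nvme_attach_controller and nvmf_subsystem_add_host are now expected to fail
  chmod 0600 /tmp/tmp.DlrdnXcx52      # target/tls.sh@181 restores owner-only access before the next positive test

Treating an over-permissive key file as a hard error is the point of this test; the NOT wrapper around run_bdevperf and setup_nvmf_tgt converts the expected failure into a pass.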
00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.660 21:36:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.660 [2024-07-15 21:36:48.459855] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:20:58.661 [2024-07-15 21:36:48.459910] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.921 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.921 [2024-07-15 21:36:48.541901] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.921 [2024-07-15 21:36:48.593518] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.921 [2024-07-15 21:36:48.593549] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.921 [2024-07-15 21:36:48.593555] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.921 [2024-07-15 21:36:48.593560] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.921 [2024-07-15 21:36:48.593563] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:58.921 [2024-07-15 21:36:48.593587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.491 21:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.491 21:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:59.491 21:36:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:59.491 21:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:59.491 21:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.752 21:36:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.752 21:36:49 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.DlrdnXcx52 00:20:59.752 21:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:59.752 21:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.DlrdnXcx52 00:20:59.752 21:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:59.752 21:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.752 21:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:59.752 21:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.752 21:36:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.DlrdnXcx52 00:20:59.752 21:36:49 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.DlrdnXcx52 00:20:59.752 21:36:49 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:59.752 [2024-07-15 21:36:49.455879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.752 21:36:49 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:00.013 
21:36:49 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:00.013 [2024-07-15 21:36:49.748595] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:00.013 [2024-07-15 21:36:49.748776] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.013 21:36:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:00.275 malloc0 00:21:00.275 21:36:49 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:00.275 21:36:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DlrdnXcx52 00:21:00.535 [2024-07-15 21:36:50.179370] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:00.535 [2024-07-15 21:36:50.179396] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:00.535 [2024-07-15 21:36:50.179417] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:00.535 request: 00:21:00.535 { 00:21:00.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.535 "host": "nqn.2016-06.io.spdk:host1", 00:21:00.535 "psk": "/tmp/tmp.DlrdnXcx52", 00:21:00.535 "method": "nvmf_subsystem_add_host", 00:21:00.535 "req_id": 1 00:21:00.535 } 00:21:00.535 Got JSON-RPC error response 00:21:00.535 response: 00:21:00.535 { 00:21:00.535 "code": -32603, 00:21:00.535 "message": "Internal error" 00:21:00.535 } 00:21:00.535 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:00.535 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:00.535 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:00.535 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:00.535 21:36:50 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2217060 00:21:00.535 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2217060 ']' 00:21:00.535 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2217060 00:21:00.535 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:00.535 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:00.535 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2217060 00:21:00.535 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:00.535 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:00.535 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2217060' 00:21:00.535 killing process with pid 2217060 00:21:00.535 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2217060 00:21:00.535 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2217060 00:21:00.795 21:36:50 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.DlrdnXcx52 00:21:00.795 21:36:50 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:00.795 
21:36:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:00.796 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:00.796 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.796 21:36:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2217433 00:21:00.796 21:36:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2217433 00:21:00.796 21:36:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:00.796 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2217433 ']' 00:21:00.796 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.796 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.796 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.796 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.796 21:36:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.796 [2024-07-15 21:36:50.433892] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:21:00.796 [2024-07-15 21:36:50.433947] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.796 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.796 [2024-07-15 21:36:50.513471] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.796 [2024-07-15 21:36:50.565851] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.796 [2024-07-15 21:36:50.565883] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.796 [2024-07-15 21:36:50.565889] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.796 [2024-07-15 21:36:50.565894] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.796 [2024-07-15 21:36:50.565898] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:00.796 [2024-07-15 21:36:50.565913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.736 21:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.736 21:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:01.736 21:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.736 21:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:01.736 21:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.736 21:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.736 21:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.DlrdnXcx52 00:21:01.736 21:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.DlrdnXcx52 00:21:01.736 21:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:01.736 [2024-07-15 21:36:51.364416] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.736 21:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:01.736 21:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:01.996 [2024-07-15 21:36:51.669164] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:01.996 [2024-07-15 21:36:51.669364] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.996 21:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:02.256 malloc0 00:21:02.256 21:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:02.256 21:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DlrdnXcx52 00:21:02.515 [2024-07-15 21:36:52.132264] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:02.515 21:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2217793 00:21:02.515 21:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:02.515 21:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:02.516 21:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2217793 /var/tmp/bdevperf.sock 00:21:02.516 21:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2217793 ']' 00:21:02.516 21:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.516 21:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.516 21:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.516 21:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.516 21:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.516 [2024-07-15 21:36:52.195865] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:21:02.516 [2024-07-15 21:36:52.195916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217793 ] 00:21:02.516 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.516 [2024-07-15 21:36:52.244766] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.516 [2024-07-15 21:36:52.297416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.519 21:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.519 21:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:03.520 21:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DlrdnXcx52 00:21:03.520 [2024-07-15 21:36:53.090602] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.520 [2024-07-15 21:36:53.090662] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:03.520 TLSTESTn1 00:21:03.520 21:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:03.779 21:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:03.779 "subsystems": [ 00:21:03.779 { 00:21:03.779 "subsystem": "keyring", 00:21:03.779 "config": [] 00:21:03.779 }, 00:21:03.779 { 00:21:03.779 "subsystem": "iobuf", 00:21:03.779 "config": [ 00:21:03.779 { 00:21:03.779 "method": "iobuf_set_options", 00:21:03.779 "params": { 00:21:03.779 "small_pool_count": 8192, 00:21:03.779 "large_pool_count": 1024, 00:21:03.779 "small_bufsize": 8192, 00:21:03.779 "large_bufsize": 135168 00:21:03.779 } 00:21:03.779 } 00:21:03.779 ] 00:21:03.779 }, 00:21:03.779 { 00:21:03.779 "subsystem": "sock", 00:21:03.779 "config": [ 00:21:03.779 { 00:21:03.779 "method": "sock_set_default_impl", 00:21:03.779 "params": { 00:21:03.779 "impl_name": "posix" 00:21:03.779 } 00:21:03.779 }, 00:21:03.779 { 00:21:03.779 "method": "sock_impl_set_options", 00:21:03.779 "params": { 00:21:03.779 "impl_name": "ssl", 00:21:03.779 "recv_buf_size": 4096, 00:21:03.779 "send_buf_size": 4096, 00:21:03.779 "enable_recv_pipe": true, 00:21:03.779 "enable_quickack": false, 00:21:03.779 "enable_placement_id": 0, 00:21:03.779 "enable_zerocopy_send_server": true, 00:21:03.779 "enable_zerocopy_send_client": false, 00:21:03.779 "zerocopy_threshold": 0, 00:21:03.779 "tls_version": 0, 00:21:03.779 "enable_ktls": false 00:21:03.779 } 00:21:03.779 }, 00:21:03.779 { 00:21:03.779 "method": "sock_impl_set_options", 00:21:03.779 "params": { 00:21:03.779 "impl_name": "posix", 00:21:03.779 "recv_buf_size": 2097152, 00:21:03.779 
"send_buf_size": 2097152, 00:21:03.779 "enable_recv_pipe": true, 00:21:03.779 "enable_quickack": false, 00:21:03.779 "enable_placement_id": 0, 00:21:03.779 "enable_zerocopy_send_server": true, 00:21:03.779 "enable_zerocopy_send_client": false, 00:21:03.779 "zerocopy_threshold": 0, 00:21:03.779 "tls_version": 0, 00:21:03.779 "enable_ktls": false 00:21:03.779 } 00:21:03.779 } 00:21:03.779 ] 00:21:03.779 }, 00:21:03.779 { 00:21:03.779 "subsystem": "vmd", 00:21:03.779 "config": [] 00:21:03.779 }, 00:21:03.779 { 00:21:03.779 "subsystem": "accel", 00:21:03.779 "config": [ 00:21:03.779 { 00:21:03.779 "method": "accel_set_options", 00:21:03.779 "params": { 00:21:03.780 "small_cache_size": 128, 00:21:03.780 "large_cache_size": 16, 00:21:03.780 "task_count": 2048, 00:21:03.780 "sequence_count": 2048, 00:21:03.780 "buf_count": 2048 00:21:03.780 } 00:21:03.780 } 00:21:03.780 ] 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "subsystem": "bdev", 00:21:03.780 "config": [ 00:21:03.780 { 00:21:03.780 "method": "bdev_set_options", 00:21:03.780 "params": { 00:21:03.780 "bdev_io_pool_size": 65535, 00:21:03.780 "bdev_io_cache_size": 256, 00:21:03.780 "bdev_auto_examine": true, 00:21:03.780 "iobuf_small_cache_size": 128, 00:21:03.780 "iobuf_large_cache_size": 16 00:21:03.780 } 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "method": "bdev_raid_set_options", 00:21:03.780 "params": { 00:21:03.780 "process_window_size_kb": 1024 00:21:03.780 } 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "method": "bdev_iscsi_set_options", 00:21:03.780 "params": { 00:21:03.780 "timeout_sec": 30 00:21:03.780 } 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "method": "bdev_nvme_set_options", 00:21:03.780 "params": { 00:21:03.780 "action_on_timeout": "none", 00:21:03.780 "timeout_us": 0, 00:21:03.780 "timeout_admin_us": 0, 00:21:03.780 "keep_alive_timeout_ms": 10000, 00:21:03.780 "arbitration_burst": 0, 00:21:03.780 "low_priority_weight": 0, 00:21:03.780 "medium_priority_weight": 0, 00:21:03.780 "high_priority_weight": 0, 00:21:03.780 "nvme_adminq_poll_period_us": 10000, 00:21:03.780 "nvme_ioq_poll_period_us": 0, 00:21:03.780 "io_queue_requests": 0, 00:21:03.780 "delay_cmd_submit": true, 00:21:03.780 "transport_retry_count": 4, 00:21:03.780 "bdev_retry_count": 3, 00:21:03.780 "transport_ack_timeout": 0, 00:21:03.780 "ctrlr_loss_timeout_sec": 0, 00:21:03.780 "reconnect_delay_sec": 0, 00:21:03.780 "fast_io_fail_timeout_sec": 0, 00:21:03.780 "disable_auto_failback": false, 00:21:03.780 "generate_uuids": false, 00:21:03.780 "transport_tos": 0, 00:21:03.780 "nvme_error_stat": false, 00:21:03.780 "rdma_srq_size": 0, 00:21:03.780 "io_path_stat": false, 00:21:03.780 "allow_accel_sequence": false, 00:21:03.780 "rdma_max_cq_size": 0, 00:21:03.780 "rdma_cm_event_timeout_ms": 0, 00:21:03.780 "dhchap_digests": [ 00:21:03.780 "sha256", 00:21:03.780 "sha384", 00:21:03.780 "sha512" 00:21:03.780 ], 00:21:03.780 "dhchap_dhgroups": [ 00:21:03.780 "null", 00:21:03.780 "ffdhe2048", 00:21:03.780 "ffdhe3072", 00:21:03.780 "ffdhe4096", 00:21:03.780 "ffdhe6144", 00:21:03.780 "ffdhe8192" 00:21:03.780 ] 00:21:03.780 } 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "method": "bdev_nvme_set_hotplug", 00:21:03.780 "params": { 00:21:03.780 "period_us": 100000, 00:21:03.780 "enable": false 00:21:03.780 } 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "method": "bdev_malloc_create", 00:21:03.780 "params": { 00:21:03.780 "name": "malloc0", 00:21:03.780 "num_blocks": 8192, 00:21:03.780 "block_size": 4096, 00:21:03.780 "physical_block_size": 4096, 00:21:03.780 "uuid": 
"d792b5fe-298a-4de8-920c-9a6d788edf04", 00:21:03.780 "optimal_io_boundary": 0 00:21:03.780 } 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "method": "bdev_wait_for_examine" 00:21:03.780 } 00:21:03.780 ] 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "subsystem": "nbd", 00:21:03.780 "config": [] 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "subsystem": "scheduler", 00:21:03.780 "config": [ 00:21:03.780 { 00:21:03.780 "method": "framework_set_scheduler", 00:21:03.780 "params": { 00:21:03.780 "name": "static" 00:21:03.780 } 00:21:03.780 } 00:21:03.780 ] 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "subsystem": "nvmf", 00:21:03.780 "config": [ 00:21:03.780 { 00:21:03.780 "method": "nvmf_set_config", 00:21:03.780 "params": { 00:21:03.780 "discovery_filter": "match_any", 00:21:03.780 "admin_cmd_passthru": { 00:21:03.780 "identify_ctrlr": false 00:21:03.780 } 00:21:03.780 } 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "method": "nvmf_set_max_subsystems", 00:21:03.780 "params": { 00:21:03.780 "max_subsystems": 1024 00:21:03.780 } 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "method": "nvmf_set_crdt", 00:21:03.780 "params": { 00:21:03.780 "crdt1": 0, 00:21:03.780 "crdt2": 0, 00:21:03.780 "crdt3": 0 00:21:03.780 } 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "method": "nvmf_create_transport", 00:21:03.780 "params": { 00:21:03.780 "trtype": "TCP", 00:21:03.780 "max_queue_depth": 128, 00:21:03.780 "max_io_qpairs_per_ctrlr": 127, 00:21:03.780 "in_capsule_data_size": 4096, 00:21:03.780 "max_io_size": 131072, 00:21:03.780 "io_unit_size": 131072, 00:21:03.780 "max_aq_depth": 128, 00:21:03.780 "num_shared_buffers": 511, 00:21:03.780 "buf_cache_size": 4294967295, 00:21:03.780 "dif_insert_or_strip": false, 00:21:03.780 "zcopy": false, 00:21:03.780 "c2h_success": false, 00:21:03.780 "sock_priority": 0, 00:21:03.780 "abort_timeout_sec": 1, 00:21:03.780 "ack_timeout": 0, 00:21:03.780 "data_wr_pool_size": 0 00:21:03.780 } 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "method": "nvmf_create_subsystem", 00:21:03.780 "params": { 00:21:03.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.780 "allow_any_host": false, 00:21:03.780 "serial_number": "SPDK00000000000001", 00:21:03.780 "model_number": "SPDK bdev Controller", 00:21:03.780 "max_namespaces": 10, 00:21:03.780 "min_cntlid": 1, 00:21:03.780 "max_cntlid": 65519, 00:21:03.780 "ana_reporting": false 00:21:03.780 } 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "method": "nvmf_subsystem_add_host", 00:21:03.780 "params": { 00:21:03.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.780 "host": "nqn.2016-06.io.spdk:host1", 00:21:03.780 "psk": "/tmp/tmp.DlrdnXcx52" 00:21:03.780 } 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "method": "nvmf_subsystem_add_ns", 00:21:03.780 "params": { 00:21:03.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.780 "namespace": { 00:21:03.780 "nsid": 1, 00:21:03.780 "bdev_name": "malloc0", 00:21:03.780 "nguid": "D792B5FE298A4DE8920C9A6D788EDF04", 00:21:03.780 "uuid": "d792b5fe-298a-4de8-920c-9a6d788edf04", 00:21:03.780 "no_auto_visible": false 00:21:03.780 } 00:21:03.780 } 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "method": "nvmf_subsystem_add_listener", 00:21:03.780 "params": { 00:21:03.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.780 "listen_address": { 00:21:03.780 "trtype": "TCP", 00:21:03.780 "adrfam": "IPv4", 00:21:03.780 "traddr": "10.0.0.2", 00:21:03.780 "trsvcid": "4420" 00:21:03.780 }, 00:21:03.780 "secure_channel": true 00:21:03.780 } 00:21:03.780 } 00:21:03.780 ] 00:21:03.780 } 00:21:03.780 ] 00:21:03.780 }' 00:21:03.780 21:36:53 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:04.039 21:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:04.039 "subsystems": [ 00:21:04.039 { 00:21:04.039 "subsystem": "keyring", 00:21:04.039 "config": [] 00:21:04.039 }, 00:21:04.039 { 00:21:04.039 "subsystem": "iobuf", 00:21:04.039 "config": [ 00:21:04.039 { 00:21:04.039 "method": "iobuf_set_options", 00:21:04.039 "params": { 00:21:04.039 "small_pool_count": 8192, 00:21:04.039 "large_pool_count": 1024, 00:21:04.039 "small_bufsize": 8192, 00:21:04.039 "large_bufsize": 135168 00:21:04.039 } 00:21:04.039 } 00:21:04.039 ] 00:21:04.039 }, 00:21:04.039 { 00:21:04.039 "subsystem": "sock", 00:21:04.039 "config": [ 00:21:04.039 { 00:21:04.039 "method": "sock_set_default_impl", 00:21:04.039 "params": { 00:21:04.039 "impl_name": "posix" 00:21:04.039 } 00:21:04.039 }, 00:21:04.040 { 00:21:04.040 "method": "sock_impl_set_options", 00:21:04.040 "params": { 00:21:04.040 "impl_name": "ssl", 00:21:04.040 "recv_buf_size": 4096, 00:21:04.040 "send_buf_size": 4096, 00:21:04.040 "enable_recv_pipe": true, 00:21:04.040 "enable_quickack": false, 00:21:04.040 "enable_placement_id": 0, 00:21:04.040 "enable_zerocopy_send_server": true, 00:21:04.040 "enable_zerocopy_send_client": false, 00:21:04.040 "zerocopy_threshold": 0, 00:21:04.040 "tls_version": 0, 00:21:04.040 "enable_ktls": false 00:21:04.040 } 00:21:04.040 }, 00:21:04.040 { 00:21:04.040 "method": "sock_impl_set_options", 00:21:04.040 "params": { 00:21:04.040 "impl_name": "posix", 00:21:04.040 "recv_buf_size": 2097152, 00:21:04.040 "send_buf_size": 2097152, 00:21:04.040 "enable_recv_pipe": true, 00:21:04.040 "enable_quickack": false, 00:21:04.040 "enable_placement_id": 0, 00:21:04.040 "enable_zerocopy_send_server": true, 00:21:04.040 "enable_zerocopy_send_client": false, 00:21:04.040 "zerocopy_threshold": 0, 00:21:04.040 "tls_version": 0, 00:21:04.040 "enable_ktls": false 00:21:04.040 } 00:21:04.040 } 00:21:04.040 ] 00:21:04.040 }, 00:21:04.040 { 00:21:04.040 "subsystem": "vmd", 00:21:04.040 "config": [] 00:21:04.040 }, 00:21:04.040 { 00:21:04.040 "subsystem": "accel", 00:21:04.040 "config": [ 00:21:04.040 { 00:21:04.040 "method": "accel_set_options", 00:21:04.040 "params": { 00:21:04.040 "small_cache_size": 128, 00:21:04.040 "large_cache_size": 16, 00:21:04.040 "task_count": 2048, 00:21:04.040 "sequence_count": 2048, 00:21:04.040 "buf_count": 2048 00:21:04.040 } 00:21:04.040 } 00:21:04.040 ] 00:21:04.040 }, 00:21:04.040 { 00:21:04.040 "subsystem": "bdev", 00:21:04.040 "config": [ 00:21:04.040 { 00:21:04.040 "method": "bdev_set_options", 00:21:04.040 "params": { 00:21:04.040 "bdev_io_pool_size": 65535, 00:21:04.040 "bdev_io_cache_size": 256, 00:21:04.040 "bdev_auto_examine": true, 00:21:04.040 "iobuf_small_cache_size": 128, 00:21:04.040 "iobuf_large_cache_size": 16 00:21:04.040 } 00:21:04.040 }, 00:21:04.040 { 00:21:04.040 "method": "bdev_raid_set_options", 00:21:04.040 "params": { 00:21:04.040 "process_window_size_kb": 1024 00:21:04.040 } 00:21:04.040 }, 00:21:04.040 { 00:21:04.040 "method": "bdev_iscsi_set_options", 00:21:04.040 "params": { 00:21:04.040 "timeout_sec": 30 00:21:04.040 } 00:21:04.040 }, 00:21:04.040 { 00:21:04.040 "method": "bdev_nvme_set_options", 00:21:04.040 "params": { 00:21:04.040 "action_on_timeout": "none", 00:21:04.040 "timeout_us": 0, 00:21:04.040 "timeout_admin_us": 0, 00:21:04.040 "keep_alive_timeout_ms": 10000, 00:21:04.040 "arbitration_burst": 0, 
00:21:04.040 "low_priority_weight": 0, 00:21:04.040 "medium_priority_weight": 0, 00:21:04.040 "high_priority_weight": 0, 00:21:04.040 "nvme_adminq_poll_period_us": 10000, 00:21:04.040 "nvme_ioq_poll_period_us": 0, 00:21:04.040 "io_queue_requests": 512, 00:21:04.040 "delay_cmd_submit": true, 00:21:04.040 "transport_retry_count": 4, 00:21:04.040 "bdev_retry_count": 3, 00:21:04.040 "transport_ack_timeout": 0, 00:21:04.040 "ctrlr_loss_timeout_sec": 0, 00:21:04.040 "reconnect_delay_sec": 0, 00:21:04.040 "fast_io_fail_timeout_sec": 0, 00:21:04.040 "disable_auto_failback": false, 00:21:04.040 "generate_uuids": false, 00:21:04.040 "transport_tos": 0, 00:21:04.040 "nvme_error_stat": false, 00:21:04.040 "rdma_srq_size": 0, 00:21:04.040 "io_path_stat": false, 00:21:04.040 "allow_accel_sequence": false, 00:21:04.040 "rdma_max_cq_size": 0, 00:21:04.040 "rdma_cm_event_timeout_ms": 0, 00:21:04.040 "dhchap_digests": [ 00:21:04.040 "sha256", 00:21:04.040 "sha384", 00:21:04.040 "sha512" 00:21:04.040 ], 00:21:04.040 "dhchap_dhgroups": [ 00:21:04.040 "null", 00:21:04.040 "ffdhe2048", 00:21:04.040 "ffdhe3072", 00:21:04.040 "ffdhe4096", 00:21:04.040 "ffdhe6144", 00:21:04.040 "ffdhe8192" 00:21:04.040 ] 00:21:04.040 } 00:21:04.040 }, 00:21:04.040 { 00:21:04.040 "method": "bdev_nvme_attach_controller", 00:21:04.040 "params": { 00:21:04.040 "name": "TLSTEST", 00:21:04.040 "trtype": "TCP", 00:21:04.040 "adrfam": "IPv4", 00:21:04.040 "traddr": "10.0.0.2", 00:21:04.040 "trsvcid": "4420", 00:21:04.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.040 "prchk_reftag": false, 00:21:04.040 "prchk_guard": false, 00:21:04.040 "ctrlr_loss_timeout_sec": 0, 00:21:04.040 "reconnect_delay_sec": 0, 00:21:04.040 "fast_io_fail_timeout_sec": 0, 00:21:04.040 "psk": "/tmp/tmp.DlrdnXcx52", 00:21:04.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.040 "hdgst": false, 00:21:04.040 "ddgst": false 00:21:04.040 } 00:21:04.040 }, 00:21:04.040 { 00:21:04.040 "method": "bdev_nvme_set_hotplug", 00:21:04.040 "params": { 00:21:04.040 "period_us": 100000, 00:21:04.040 "enable": false 00:21:04.040 } 00:21:04.040 }, 00:21:04.040 { 00:21:04.040 "method": "bdev_wait_for_examine" 00:21:04.040 } 00:21:04.040 ] 00:21:04.040 }, 00:21:04.040 { 00:21:04.040 "subsystem": "nbd", 00:21:04.040 "config": [] 00:21:04.040 } 00:21:04.040 ] 00:21:04.040 }' 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2217793 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2217793 ']' 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2217793 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2217793 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2217793' 00:21:04.040 killing process with pid 2217793 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2217793 00:21:04.040 Received shutdown signal, test time was about 10.000000 seconds 00:21:04.040 00:21:04.040 Latency(us) 00:21:04.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:04.040 =================================================================================================================== 00:21:04.040 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:04.040 [2024-07-15 21:36:53.715204] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2217793 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2217433 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2217433 ']' 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2217433 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.040 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2217433 00:21:04.300 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:04.300 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:04.300 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2217433' 00:21:04.300 killing process with pid 2217433 00:21:04.300 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2217433 00:21:04.300 [2024-07-15 21:36:53.880959] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:04.300 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2217433 00:21:04.300 21:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:04.300 21:36:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:04.300 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:04.300 21:36:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.300 21:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:04.300 "subsystems": [ 00:21:04.300 { 00:21:04.300 "subsystem": "keyring", 00:21:04.300 "config": [] 00:21:04.300 }, 00:21:04.300 { 00:21:04.300 "subsystem": "iobuf", 00:21:04.300 "config": [ 00:21:04.300 { 00:21:04.300 "method": "iobuf_set_options", 00:21:04.300 "params": { 00:21:04.300 "small_pool_count": 8192, 00:21:04.300 "large_pool_count": 1024, 00:21:04.300 "small_bufsize": 8192, 00:21:04.300 "large_bufsize": 135168 00:21:04.300 } 00:21:04.300 } 00:21:04.300 ] 00:21:04.300 }, 00:21:04.300 { 00:21:04.300 "subsystem": "sock", 00:21:04.300 "config": [ 00:21:04.300 { 00:21:04.300 "method": "sock_set_default_impl", 00:21:04.300 "params": { 00:21:04.300 "impl_name": "posix" 00:21:04.300 } 00:21:04.300 }, 00:21:04.300 { 00:21:04.300 "method": "sock_impl_set_options", 00:21:04.300 "params": { 00:21:04.300 "impl_name": "ssl", 00:21:04.300 "recv_buf_size": 4096, 00:21:04.300 "send_buf_size": 4096, 00:21:04.300 "enable_recv_pipe": true, 00:21:04.300 "enable_quickack": false, 00:21:04.300 "enable_placement_id": 0, 00:21:04.300 "enable_zerocopy_send_server": true, 00:21:04.300 "enable_zerocopy_send_client": false, 00:21:04.300 "zerocopy_threshold": 0, 00:21:04.300 "tls_version": 0, 00:21:04.300 "enable_ktls": false 00:21:04.300 } 00:21:04.300 }, 00:21:04.300 { 00:21:04.300 "method": "sock_impl_set_options", 
00:21:04.300 "params": { 00:21:04.300 "impl_name": "posix", 00:21:04.300 "recv_buf_size": 2097152, 00:21:04.300 "send_buf_size": 2097152, 00:21:04.300 "enable_recv_pipe": true, 00:21:04.300 "enable_quickack": false, 00:21:04.300 "enable_placement_id": 0, 00:21:04.300 "enable_zerocopy_send_server": true, 00:21:04.300 "enable_zerocopy_send_client": false, 00:21:04.300 "zerocopy_threshold": 0, 00:21:04.300 "tls_version": 0, 00:21:04.300 "enable_ktls": false 00:21:04.300 } 00:21:04.300 } 00:21:04.300 ] 00:21:04.300 }, 00:21:04.300 { 00:21:04.300 "subsystem": "vmd", 00:21:04.300 "config": [] 00:21:04.300 }, 00:21:04.300 { 00:21:04.300 "subsystem": "accel", 00:21:04.300 "config": [ 00:21:04.300 { 00:21:04.300 "method": "accel_set_options", 00:21:04.300 "params": { 00:21:04.300 "small_cache_size": 128, 00:21:04.300 "large_cache_size": 16, 00:21:04.300 "task_count": 2048, 00:21:04.300 "sequence_count": 2048, 00:21:04.300 "buf_count": 2048 00:21:04.300 } 00:21:04.300 } 00:21:04.300 ] 00:21:04.300 }, 00:21:04.300 { 00:21:04.300 "subsystem": "bdev", 00:21:04.300 "config": [ 00:21:04.300 { 00:21:04.300 "method": "bdev_set_options", 00:21:04.300 "params": { 00:21:04.300 "bdev_io_pool_size": 65535, 00:21:04.300 "bdev_io_cache_size": 256, 00:21:04.300 "bdev_auto_examine": true, 00:21:04.300 "iobuf_small_cache_size": 128, 00:21:04.300 "iobuf_large_cache_size": 16 00:21:04.300 } 00:21:04.300 }, 00:21:04.300 { 00:21:04.300 "method": "bdev_raid_set_options", 00:21:04.300 "params": { 00:21:04.300 "process_window_size_kb": 1024 00:21:04.300 } 00:21:04.300 }, 00:21:04.300 { 00:21:04.300 "method": "bdev_iscsi_set_options", 00:21:04.300 "params": { 00:21:04.300 "timeout_sec": 30 00:21:04.300 } 00:21:04.300 }, 00:21:04.300 { 00:21:04.300 "method": "bdev_nvme_set_options", 00:21:04.300 "params": { 00:21:04.300 "action_on_timeout": "none", 00:21:04.300 "timeout_us": 0, 00:21:04.300 "timeout_admin_us": 0, 00:21:04.300 "keep_alive_timeout_ms": 10000, 00:21:04.300 "arbitration_burst": 0, 00:21:04.300 "low_priority_weight": 0, 00:21:04.300 "medium_priority_weight": 0, 00:21:04.300 "high_priority_weight": 0, 00:21:04.300 "nvme_adminq_poll_period_us": 10000, 00:21:04.300 "nvme_ioq_poll_period_us": 0, 00:21:04.300 "io_queue_requests": 0, 00:21:04.300 "delay_cmd_submit": true, 00:21:04.300 "transport_retry_count": 4, 00:21:04.300 "bdev_retry_count": 3, 00:21:04.300 "transport_ack_timeout": 0, 00:21:04.300 "ctrlr_loss_timeout_sec": 0, 00:21:04.300 "reconnect_delay_sec": 0, 00:21:04.300 "fast_io_fail_timeout_sec": 0, 00:21:04.300 "disable_auto_failback": false, 00:21:04.300 "generate_uuids": false, 00:21:04.300 "transport_tos": 0, 00:21:04.300 "nvme_error_stat": false, 00:21:04.300 "rdma_srq_size": 0, 00:21:04.300 "io_path_stat": false, 00:21:04.300 "allow_accel_sequence": false, 00:21:04.300 "rdma_max_cq_size": 0, 00:21:04.300 "rdma_cm_event_timeout_ms": 0, 00:21:04.300 "dhchap_digests": [ 00:21:04.300 "sha256", 00:21:04.300 "sha384", 00:21:04.301 "sha512" 00:21:04.301 ], 00:21:04.301 "dhchap_dhgroups": [ 00:21:04.301 "null", 00:21:04.301 "ffdhe2048", 00:21:04.301 "ffdhe3072", 00:21:04.301 "ffdhe4096", 00:21:04.301 "ffdhe6144", 00:21:04.301 "ffdhe8192" 00:21:04.301 ] 00:21:04.301 } 00:21:04.301 }, 00:21:04.301 { 00:21:04.301 "method": "bdev_nvme_set_hotplug", 00:21:04.301 "params": { 00:21:04.301 "period_us": 100000, 00:21:04.301 "enable": false 00:21:04.301 } 00:21:04.301 }, 00:21:04.301 { 00:21:04.301 "method": "bdev_malloc_create", 00:21:04.301 "params": { 00:21:04.301 "name": "malloc0", 00:21:04.301 "num_blocks": 8192, 
00:21:04.301 "block_size": 4096, 00:21:04.301 "physical_block_size": 4096, 00:21:04.301 "uuid": "d792b5fe-298a-4de8-920c-9a6d788edf04", 00:21:04.301 "optimal_io_boundary": 0 00:21:04.301 } 00:21:04.301 }, 00:21:04.301 { 00:21:04.301 "method": "bdev_wait_for_examine" 00:21:04.301 } 00:21:04.301 ] 00:21:04.301 }, 00:21:04.301 { 00:21:04.301 "subsystem": "nbd", 00:21:04.301 "config": [] 00:21:04.301 }, 00:21:04.301 { 00:21:04.301 "subsystem": "scheduler", 00:21:04.301 "config": [ 00:21:04.301 { 00:21:04.301 "method": "framework_set_scheduler", 00:21:04.301 "params": { 00:21:04.301 "name": "static" 00:21:04.301 } 00:21:04.301 } 00:21:04.301 ] 00:21:04.301 }, 00:21:04.301 { 00:21:04.301 "subsystem": "nvmf", 00:21:04.301 "config": [ 00:21:04.301 { 00:21:04.301 "method": "nvmf_set_config", 00:21:04.301 "params": { 00:21:04.301 "discovery_filter": "match_any", 00:21:04.301 "admin_cmd_passthru": { 00:21:04.301 "identify_ctrlr": false 00:21:04.301 } 00:21:04.301 } 00:21:04.301 }, 00:21:04.301 { 00:21:04.301 "method": "nvmf_set_max_subsystems", 00:21:04.301 "params": { 00:21:04.301 "max_subsystems": 1024 00:21:04.301 } 00:21:04.301 }, 00:21:04.301 { 00:21:04.301 "method": "nvmf_set_crdt", 00:21:04.301 "params": { 00:21:04.301 "crdt1": 0, 00:21:04.301 "crdt2": 0, 00:21:04.301 "crdt3": 0 00:21:04.301 } 00:21:04.301 }, 00:21:04.301 { 00:21:04.301 "method": "nvmf_create_transport", 00:21:04.301 "params": { 00:21:04.301 "trtype": "TCP", 00:21:04.301 "max_queue_depth": 128, 00:21:04.301 "max_io_qpairs_per_ctrlr": 127, 00:21:04.301 "in_capsule_data_size": 4096, 00:21:04.301 "max_io_size": 131072, 00:21:04.301 "io_unit_size": 131072, 00:21:04.301 "max_aq_depth": 128, 00:21:04.301 "num_shared_buffers": 511, 00:21:04.301 "buf_cache_size": 4294967295, 00:21:04.301 "dif_insert_or_strip": false, 00:21:04.301 "zcopy": false, 00:21:04.301 "c2h_success": false, 00:21:04.301 "sock_priority": 0, 00:21:04.301 "abort_timeout_sec": 1, 00:21:04.301 "ack_timeout": 0, 00:21:04.301 "data_wr_pool_size": 0 00:21:04.301 } 00:21:04.301 }, 00:21:04.301 { 00:21:04.301 "method": "nvmf_create_subsystem", 00:21:04.301 "params": { 00:21:04.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.301 "allow_any_host": false, 00:21:04.301 "serial_number": "SPDK00000000000001", 00:21:04.301 "model_number": "SPDK bdev Controller", 00:21:04.301 "max_namespaces": 10, 00:21:04.301 "min_cntlid": 1, 00:21:04.301 "max_cntlid": 65519, 00:21:04.301 "ana_reporting": false 00:21:04.301 } 00:21:04.301 }, 00:21:04.301 { 00:21:04.301 "method": "nvmf_subsystem_add_host", 00:21:04.301 "params": { 00:21:04.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.301 "host": "nqn.2016-06.io.spdk:host1", 00:21:04.301 "psk": "/tmp/tmp.DlrdnXcx52" 00:21:04.301 } 00:21:04.301 }, 00:21:04.301 { 00:21:04.301 "method": "nvmf_subsystem_add_ns", 00:21:04.301 "params": { 00:21:04.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.301 "namespace": { 00:21:04.301 "nsid": 1, 00:21:04.301 "bdev_name": "malloc0", 00:21:04.301 "nguid": "D792B5FE298A4DE8920C9A6D788EDF04", 00:21:04.301 "uuid": "d792b5fe-298a-4de8-920c-9a6d788edf04", 00:21:04.301 "no_auto_visible": false 00:21:04.301 } 00:21:04.301 } 00:21:04.301 }, 00:21:04.301 { 00:21:04.301 "method": "nvmf_subsystem_add_listener", 00:21:04.301 "params": { 00:21:04.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.301 "listen_address": { 00:21:04.301 "trtype": "TCP", 00:21:04.301 "adrfam": "IPv4", 00:21:04.301 "traddr": "10.0.0.2", 00:21:04.301 "trsvcid": "4420" 00:21:04.301 }, 00:21:04.301 "secure_channel": true 00:21:04.301 } 
00:21:04.301 } 00:21:04.301 ] 00:21:04.301 } 00:21:04.301 ] 00:21:04.301 }' 00:21:04.301 21:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2218145 00:21:04.301 21:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2218145 00:21:04.301 21:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:04.301 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2218145 ']' 00:21:04.301 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.301 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.301 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.301 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.301 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.301 [2024-07-15 21:36:54.058105] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:21:04.301 [2024-07-15 21:36:54.058169] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.301 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.561 [2024-07-15 21:36:54.138149] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.561 [2024-07-15 21:36:54.194279] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.561 [2024-07-15 21:36:54.194312] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.561 [2024-07-15 21:36:54.194318] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.561 [2024-07-15 21:36:54.194322] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.561 [2024-07-15 21:36:54.194326] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
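The JSON blob closed above is the complete target configuration that target/tls.sh feeds into nvmf_tgt through -c /dev/fd/62. The TLS-relevant pieces sit in the nvmf section: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 backed by malloc0, host nqn.2016-06.io.spdk:host1 bound to the PSK file /tmp/tmp.DlrdnXcx52, and a listener on 10.0.0.2:4420 with secure_channel enabled. The same state can also be built against an already running target with scripts/rpc.py, as the setup_nvmf_tgt step later in this log does. A minimal sketch of that runtime sequence, using the same NQNs, address and PSK path as above (rpc.py is shown by its repo-relative name rather than the full workspace path in the trace, and the target's default RPC socket is assumed):

    # create the TCP transport and a TLS-capable listener (-k), matching the nvmf section above
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    # back the subsystem with a 32 MiB / 4 KiB-block malloc bdev (8192 blocks, as in the config above)
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # allow host1 in, handing the target the PSK file directly (the deprecated "PSK path" form)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DlrdnXcx52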
00:21:04.561 [2024-07-15 21:36:54.194377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.822 [2024-07-15 21:36:54.377702] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.822 [2024-07-15 21:36:54.393676] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:04.822 [2024-07-15 21:36:54.409728] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:04.822 [2024-07-15 21:36:54.423300] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.082 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.082 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:05.082 21:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:05.082 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:05.082 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.082 21:36:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.082 21:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2218494 00:21:05.083 21:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2218494 /var/tmp/bdevperf.sock 00:21:05.083 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2218494 ']' 00:21:05.083 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.083 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.083 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:05.083 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.083 21:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:05.083 21:36:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.083 21:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:05.083 "subsystems": [ 00:21:05.083 { 00:21:05.083 "subsystem": "keyring", 00:21:05.083 "config": [] 00:21:05.083 }, 00:21:05.083 { 00:21:05.083 "subsystem": "iobuf", 00:21:05.083 "config": [ 00:21:05.083 { 00:21:05.083 "method": "iobuf_set_options", 00:21:05.083 "params": { 00:21:05.083 "small_pool_count": 8192, 00:21:05.083 "large_pool_count": 1024, 00:21:05.083 "small_bufsize": 8192, 00:21:05.083 "large_bufsize": 135168 00:21:05.083 } 00:21:05.083 } 00:21:05.083 ] 00:21:05.083 }, 00:21:05.083 { 00:21:05.083 "subsystem": "sock", 00:21:05.083 "config": [ 00:21:05.083 { 00:21:05.083 "method": "sock_set_default_impl", 00:21:05.083 "params": { 00:21:05.083 "impl_name": "posix" 00:21:05.083 } 00:21:05.083 }, 00:21:05.083 { 00:21:05.083 "method": "sock_impl_set_options", 00:21:05.083 "params": { 00:21:05.083 "impl_name": "ssl", 00:21:05.083 "recv_buf_size": 4096, 00:21:05.083 "send_buf_size": 4096, 00:21:05.083 "enable_recv_pipe": true, 00:21:05.083 "enable_quickack": false, 00:21:05.083 "enable_placement_id": 0, 00:21:05.083 "enable_zerocopy_send_server": true, 00:21:05.083 "enable_zerocopy_send_client": false, 00:21:05.083 "zerocopy_threshold": 0, 00:21:05.083 "tls_version": 0, 00:21:05.083 "enable_ktls": false 00:21:05.083 } 00:21:05.083 }, 00:21:05.083 { 00:21:05.083 "method": "sock_impl_set_options", 00:21:05.083 "params": { 00:21:05.083 "impl_name": "posix", 00:21:05.083 "recv_buf_size": 2097152, 00:21:05.083 "send_buf_size": 2097152, 00:21:05.083 "enable_recv_pipe": true, 00:21:05.083 "enable_quickack": false, 00:21:05.083 "enable_placement_id": 0, 00:21:05.083 "enable_zerocopy_send_server": true, 00:21:05.083 "enable_zerocopy_send_client": false, 00:21:05.083 "zerocopy_threshold": 0, 00:21:05.083 "tls_version": 0, 00:21:05.083 "enable_ktls": false 00:21:05.083 } 00:21:05.083 } 00:21:05.083 ] 00:21:05.083 }, 00:21:05.083 { 00:21:05.083 "subsystem": "vmd", 00:21:05.083 "config": [] 00:21:05.083 }, 00:21:05.083 { 00:21:05.083 "subsystem": "accel", 00:21:05.083 "config": [ 00:21:05.083 { 00:21:05.083 "method": "accel_set_options", 00:21:05.083 "params": { 00:21:05.083 "small_cache_size": 128, 00:21:05.083 "large_cache_size": 16, 00:21:05.083 "task_count": 2048, 00:21:05.083 "sequence_count": 2048, 00:21:05.083 "buf_count": 2048 00:21:05.083 } 00:21:05.083 } 00:21:05.083 ] 00:21:05.083 }, 00:21:05.083 { 00:21:05.083 "subsystem": "bdev", 00:21:05.083 "config": [ 00:21:05.083 { 00:21:05.083 "method": "bdev_set_options", 00:21:05.083 "params": { 00:21:05.083 "bdev_io_pool_size": 65535, 00:21:05.083 "bdev_io_cache_size": 256, 00:21:05.083 "bdev_auto_examine": true, 00:21:05.083 "iobuf_small_cache_size": 128, 00:21:05.083 "iobuf_large_cache_size": 16 00:21:05.083 } 00:21:05.083 }, 00:21:05.083 { 00:21:05.083 "method": "bdev_raid_set_options", 00:21:05.083 "params": { 00:21:05.083 "process_window_size_kb": 1024 00:21:05.083 } 00:21:05.083 }, 00:21:05.083 { 00:21:05.083 "method": "bdev_iscsi_set_options", 00:21:05.083 "params": { 00:21:05.083 "timeout_sec": 30 00:21:05.083 } 00:21:05.083 }, 00:21:05.083 { 00:21:05.083 "method": 
"bdev_nvme_set_options", 00:21:05.083 "params": { 00:21:05.083 "action_on_timeout": "none", 00:21:05.083 "timeout_us": 0, 00:21:05.083 "timeout_admin_us": 0, 00:21:05.083 "keep_alive_timeout_ms": 10000, 00:21:05.083 "arbitration_burst": 0, 00:21:05.083 "low_priority_weight": 0, 00:21:05.083 "medium_priority_weight": 0, 00:21:05.083 "high_priority_weight": 0, 00:21:05.083 "nvme_adminq_poll_period_us": 10000, 00:21:05.083 "nvme_ioq_poll_period_us": 0, 00:21:05.083 "io_queue_requests": 512, 00:21:05.083 "delay_cmd_submit": true, 00:21:05.083 "transport_retry_count": 4, 00:21:05.083 "bdev_retry_count": 3, 00:21:05.083 "transport_ack_timeout": 0, 00:21:05.083 "ctrlr_loss_timeout_sec": 0, 00:21:05.083 "reconnect_delay_sec": 0, 00:21:05.083 "fast_io_fail_timeout_sec": 0, 00:21:05.083 "disable_auto_failback": false, 00:21:05.083 "generate_uuids": false, 00:21:05.083 "transport_tos": 0, 00:21:05.083 "nvme_error_stat": false, 00:21:05.083 "rdma_srq_size": 0, 00:21:05.083 "io_path_stat": false, 00:21:05.083 "allow_accel_sequence": false, 00:21:05.083 "rdma_max_cq_size": 0, 00:21:05.083 "rdma_cm_event_timeout_ms": 0, 00:21:05.083 "dhchap_digests": [ 00:21:05.083 "sha256", 00:21:05.083 "sha384", 00:21:05.083 "sha512" 00:21:05.083 ], 00:21:05.083 "dhchap_dhgroups": [ 00:21:05.083 "null", 00:21:05.083 "ffdhe2048", 00:21:05.083 "ffdhe3072", 00:21:05.083 "ffdhe4096", 00:21:05.083 "ffdhe6144", 00:21:05.083 "ffdhe8192" 00:21:05.083 ] 00:21:05.083 } 00:21:05.083 }, 00:21:05.083 { 00:21:05.083 "method": "bdev_nvme_attach_controller", 00:21:05.083 "params": { 00:21:05.083 "name": "TLSTEST", 00:21:05.083 "trtype": "TCP", 00:21:05.083 "adrfam": "IPv4", 00:21:05.083 "traddr": "10.0.0.2", 00:21:05.083 "trsvcid": "4420", 00:21:05.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.083 "prchk_reftag": false, 00:21:05.083 "prchk_guard": false, 00:21:05.083 "ctrlr_loss_timeout_sec": 0, 00:21:05.083 "reconnect_delay_sec": 0, 00:21:05.083 "fast_io_fail_timeout_sec": 0, 00:21:05.083 "psk": "/tmp/tmp.DlrdnXcx52", 00:21:05.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.083 "hdgst": false, 00:21:05.083 "ddgst": false 00:21:05.083 } 00:21:05.083 }, 00:21:05.083 { 00:21:05.083 "method": "bdev_nvme_set_hotplug", 00:21:05.083 "params": { 00:21:05.083 "period_us": 100000, 00:21:05.083 "enable": false 00:21:05.083 } 00:21:05.083 }, 00:21:05.083 { 00:21:05.083 "method": "bdev_wait_for_examine" 00:21:05.083 } 00:21:05.083 ] 00:21:05.083 }, 00:21:05.083 { 00:21:05.083 "subsystem": "nbd", 00:21:05.083 "config": [] 00:21:05.083 } 00:21:05.083 ] 00:21:05.083 }' 00:21:05.344 [2024-07-15 21:36:54.902429] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:21:05.344 [2024-07-15 21:36:54.902483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218494 ] 00:21:05.344 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.344 [2024-07-15 21:36:54.952551] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.344 [2024-07-15 21:36:55.004787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.344 [2024-07-15 21:36:55.128935] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.344 [2024-07-15 21:36:55.129002] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:05.914 21:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.914 21:36:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:05.914 21:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:06.198 Running I/O for 10 seconds... 00:21:16.200 00:21:16.200 Latency(us) 00:21:16.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.200 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:16.200 Verification LBA range: start 0x0 length 0x2000 00:21:16.200 TLSTESTn1 : 10.08 2423.24 9.47 0.00 0.00 52628.21 6144.00 95245.65 00:21:16.200 =================================================================================================================== 00:21:16.200 Total : 2423.24 9.47 0.00 0.00 52628.21 6144.00 95245.65 00:21:16.200 0 00:21:16.200 21:37:05 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:16.200 21:37:05 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2218494 00:21:16.200 21:37:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2218494 ']' 00:21:16.200 21:37:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2218494 00:21:16.200 21:37:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:16.200 21:37:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.200 21:37:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2218494 00:21:16.200 21:37:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:16.200 21:37:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:16.200 21:37:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2218494' 00:21:16.200 killing process with pid 2218494 00:21:16.200 21:37:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2218494 00:21:16.200 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.200 00:21:16.200 Latency(us) 00:21:16.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.200 =================================================================================================================== 00:21:16.200 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.200 [2024-07-15 21:37:05.977401] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:16.200 21:37:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2218494 00:21:16.461 21:37:06 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2218145 00:21:16.461 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2218145 ']' 00:21:16.461 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2218145 00:21:16.461 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:16.461 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.461 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2218145 00:21:16.461 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:16.461 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:16.461 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2218145' 00:21:16.461 killing process with pid 2218145 00:21:16.461 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2218145 00:21:16.461 [2024-07-15 21:37:06.146684] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:16.461 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2218145 00:21:16.722 21:37:06 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:16.722 21:37:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.722 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:16.722 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.722 21:37:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2220550 00:21:16.722 21:37:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2220550 00:21:16.722 21:37:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:16.722 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2220550 ']' 00:21:16.722 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.722 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.722 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.722 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.722 21:37:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.722 [2024-07-15 21:37:06.338605] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:21:16.722 [2024-07-15 21:37:06.338662] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.722 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.722 [2024-07-15 21:37:06.404231] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.722 [2024-07-15 21:37:06.469588] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.722 [2024-07-15 21:37:06.469626] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.722 [2024-07-15 21:37:06.469633] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.722 [2024-07-15 21:37:06.469640] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.722 [2024-07-15 21:37:06.469645] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.722 [2024-07-15 21:37:06.469664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.664 21:37:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.664 21:37:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:17.664 21:37:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.664 21:37:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:17.664 21:37:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.664 21:37:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.664 21:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.DlrdnXcx52 00:21:17.664 21:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.DlrdnXcx52 00:21:17.664 21:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:17.664 [2024-07-15 21:37:07.276903] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.664 21:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:17.664 21:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:17.925 [2024-07-15 21:37:07.565629] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:17.925 [2024-07-15 21:37:07.565850] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.925 21:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:17.925 malloc0 00:21:18.185 21:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:18.185 21:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.DlrdnXcx52 00:21:18.446 [2024-07-15 21:37:08.033772] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:18.446 21:37:08 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:18.446 21:37:08 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2220945 00:21:18.446 21:37:08 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.446 21:37:08 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2220945 /var/tmp/bdevperf.sock 00:21:18.446 21:37:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2220945 ']' 00:21:18.446 21:37:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.446 21:37:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:18.446 21:37:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.446 21:37:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:18.446 21:37:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.446 [2024-07-15 21:37:08.078174] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:21:18.446 [2024-07-15 21:37:08.078222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220945 ] 00:21:18.446 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.446 [2024-07-15 21:37:08.153531] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.446 [2024-07-15 21:37:08.207438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.388 21:37:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:19.388 21:37:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:19.388 21:37:08 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DlrdnXcx52 00:21:19.388 21:37:09 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:19.388 [2024-07-15 21:37:09.153421] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.649 nvme0n1 00:21:19.649 21:37:09 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:19.649 Running I/O for 1 seconds... 
00:21:21.035 00:21:21.035 Latency(us) 00:21:21.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.035 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:21.035 Verification LBA range: start 0x0 length 0x2000 00:21:21.035 nvme0n1 : 1.07 1872.37 7.31 0.00 0.00 66515.86 6062.08 130198.19 00:21:21.035 =================================================================================================================== 00:21:21.035 Total : 1872.37 7.31 0.00 0.00 66515.86 6062.08 130198.19 00:21:21.035 0 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2220945 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2220945 ']' 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2220945 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2220945 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2220945' 00:21:21.035 killing process with pid 2220945 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2220945 00:21:21.035 Received shutdown signal, test time was about 1.000000 seconds 00:21:21.035 00:21:21.035 Latency(us) 00:21:21.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.035 =================================================================================================================== 00:21:21.035 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2220945 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2220550 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2220550 ']' 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2220550 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2220550 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:21.035 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:21.036 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2220550' 00:21:21.036 killing process with pid 2220550 00:21:21.036 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2220550 00:21:21.036 [2024-07-15 21:37:10.651988] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:21.036 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2220550 00:21:21.036 21:37:10 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:21.036 21:37:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:21.036 
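The one-second verify that just completed used the keyring-based flow on the initiator side: the PSK file is first registered as a named key over the bdevperf RPC socket and the controller is then attached by key name rather than by raw path (the target side of this run still received the file path via nvmf_subsystem_add_host --psk, which is why a "PSK path" deprecation hit is logged at teardown). The commands, copied from the trace above, with -s pointing rpc.py at bdevperf's socket instead of the target's /var/tmp/spdk.sock:

    # register /tmp/tmp.DlrdnXcx52 as key0 inside the bdevperf process
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DlrdnXcx52
    # attach to the TLS listener using the named key
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # kick off the queued bdevperf job (this run was started with -q 128 -o 4k -w verify -t 1)
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests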
21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:21.036 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.036 21:37:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2221559 00:21:21.036 21:37:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2221559 00:21:21.036 21:37:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:21.036 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2221559 ']' 00:21:21.036 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.036 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.036 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.036 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.036 21:37:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.297 [2024-07-15 21:37:10.850948] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:21:21.297 [2024-07-15 21:37:10.851001] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.297 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.297 [2024-07-15 21:37:10.914585] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.297 [2024-07-15 21:37:10.979065] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.297 [2024-07-15 21:37:10.979104] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.297 [2024-07-15 21:37:10.979112] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.297 [2024-07-15 21:37:10.979118] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.297 [2024-07-15 21:37:10.979129] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:21.297 [2024-07-15 21:37:10.979148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.870 21:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.870 21:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:21.870 21:37:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.870 21:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.870 21:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.870 21:37:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.870 21:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:21.870 21:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.870 21:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.870 [2024-07-15 21:37:11.653595] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.870 malloc0 00:21:22.131 [2024-07-15 21:37:11.680335] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.131 [2024-07-15 21:37:11.680534] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.131 21:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.131 21:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2221752 00:21:22.131 21:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2221752 /var/tmp/bdevperf.sock 00:21:22.131 21:37:11 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:22.131 21:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2221752 ']' 00:21:22.131 21:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.131 21:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.131 21:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.131 21:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.131 21:37:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.131 [2024-07-15 21:37:11.755962] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:21:22.131 [2024-07-15 21:37:11.756008] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221752 ] 00:21:22.131 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.131 [2024-07-15 21:37:11.830050] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.131 [2024-07-15 21:37:11.883832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.071 21:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.071 21:37:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:23.071 21:37:12 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DlrdnXcx52 00:21:23.071 21:37:12 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:23.071 [2024-07-15 21:37:12.809523] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.331 nvme0n1 00:21:23.331 21:37:12 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:23.331 Running I/O for 1 seconds... 00:21:24.272 00:21:24.272 Latency(us) 00:21:24.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.272 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:24.272 Verification LBA range: start 0x0 length 0x2000 00:21:24.272 nvme0n1 : 1.05 2520.99 9.85 0.00 0.00 49654.04 4396.37 124955.31 00:21:24.272 =================================================================================================================== 00:21:24.272 Total : 2520.99 9.85 0.00 0.00 49654.04 4396.37 124955.31 00:21:24.272 0 00:21:24.272 21:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:24.272 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.272 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.555 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.555 21:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:24.555 "subsystems": [ 00:21:24.555 { 00:21:24.555 "subsystem": "keyring", 00:21:24.555 "config": [ 00:21:24.555 { 00:21:24.555 "method": "keyring_file_add_key", 00:21:24.555 "params": { 00:21:24.555 "name": "key0", 00:21:24.555 "path": "/tmp/tmp.DlrdnXcx52" 00:21:24.555 } 00:21:24.555 } 00:21:24.555 ] 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "subsystem": "iobuf", 00:21:24.555 "config": [ 00:21:24.555 { 00:21:24.555 "method": "iobuf_set_options", 00:21:24.555 "params": { 00:21:24.555 "small_pool_count": 8192, 00:21:24.555 "large_pool_count": 1024, 00:21:24.555 "small_bufsize": 8192, 00:21:24.555 "large_bufsize": 135168 00:21:24.555 } 00:21:24.555 } 00:21:24.555 ] 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "subsystem": "sock", 00:21:24.555 "config": [ 00:21:24.555 { 00:21:24.555 "method": "sock_set_default_impl", 00:21:24.555 "params": { 00:21:24.555 "impl_name": "posix" 00:21:24.555 } 
00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "method": "sock_impl_set_options", 00:21:24.555 "params": { 00:21:24.555 "impl_name": "ssl", 00:21:24.555 "recv_buf_size": 4096, 00:21:24.555 "send_buf_size": 4096, 00:21:24.555 "enable_recv_pipe": true, 00:21:24.555 "enable_quickack": false, 00:21:24.555 "enable_placement_id": 0, 00:21:24.555 "enable_zerocopy_send_server": true, 00:21:24.555 "enable_zerocopy_send_client": false, 00:21:24.555 "zerocopy_threshold": 0, 00:21:24.555 "tls_version": 0, 00:21:24.555 "enable_ktls": false 00:21:24.555 } 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "method": "sock_impl_set_options", 00:21:24.555 "params": { 00:21:24.555 "impl_name": "posix", 00:21:24.555 "recv_buf_size": 2097152, 00:21:24.555 "send_buf_size": 2097152, 00:21:24.555 "enable_recv_pipe": true, 00:21:24.555 "enable_quickack": false, 00:21:24.555 "enable_placement_id": 0, 00:21:24.555 "enable_zerocopy_send_server": true, 00:21:24.555 "enable_zerocopy_send_client": false, 00:21:24.555 "zerocopy_threshold": 0, 00:21:24.555 "tls_version": 0, 00:21:24.555 "enable_ktls": false 00:21:24.555 } 00:21:24.555 } 00:21:24.555 ] 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "subsystem": "vmd", 00:21:24.555 "config": [] 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "subsystem": "accel", 00:21:24.555 "config": [ 00:21:24.555 { 00:21:24.555 "method": "accel_set_options", 00:21:24.555 "params": { 00:21:24.555 "small_cache_size": 128, 00:21:24.555 "large_cache_size": 16, 00:21:24.555 "task_count": 2048, 00:21:24.555 "sequence_count": 2048, 00:21:24.555 "buf_count": 2048 00:21:24.555 } 00:21:24.555 } 00:21:24.555 ] 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "subsystem": "bdev", 00:21:24.555 "config": [ 00:21:24.555 { 00:21:24.555 "method": "bdev_set_options", 00:21:24.555 "params": { 00:21:24.555 "bdev_io_pool_size": 65535, 00:21:24.555 "bdev_io_cache_size": 256, 00:21:24.555 "bdev_auto_examine": true, 00:21:24.555 "iobuf_small_cache_size": 128, 00:21:24.555 "iobuf_large_cache_size": 16 00:21:24.555 } 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "method": "bdev_raid_set_options", 00:21:24.555 "params": { 00:21:24.555 "process_window_size_kb": 1024 00:21:24.555 } 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "method": "bdev_iscsi_set_options", 00:21:24.555 "params": { 00:21:24.555 "timeout_sec": 30 00:21:24.555 } 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "method": "bdev_nvme_set_options", 00:21:24.555 "params": { 00:21:24.555 "action_on_timeout": "none", 00:21:24.555 "timeout_us": 0, 00:21:24.555 "timeout_admin_us": 0, 00:21:24.555 "keep_alive_timeout_ms": 10000, 00:21:24.555 "arbitration_burst": 0, 00:21:24.555 "low_priority_weight": 0, 00:21:24.555 "medium_priority_weight": 0, 00:21:24.555 "high_priority_weight": 0, 00:21:24.555 "nvme_adminq_poll_period_us": 10000, 00:21:24.555 "nvme_ioq_poll_period_us": 0, 00:21:24.555 "io_queue_requests": 0, 00:21:24.555 "delay_cmd_submit": true, 00:21:24.555 "transport_retry_count": 4, 00:21:24.555 "bdev_retry_count": 3, 00:21:24.555 "transport_ack_timeout": 0, 00:21:24.555 "ctrlr_loss_timeout_sec": 0, 00:21:24.555 "reconnect_delay_sec": 0, 00:21:24.555 "fast_io_fail_timeout_sec": 0, 00:21:24.555 "disable_auto_failback": false, 00:21:24.555 "generate_uuids": false, 00:21:24.555 "transport_tos": 0, 00:21:24.555 "nvme_error_stat": false, 00:21:24.555 "rdma_srq_size": 0, 00:21:24.555 "io_path_stat": false, 00:21:24.555 "allow_accel_sequence": false, 00:21:24.555 "rdma_max_cq_size": 0, 00:21:24.555 "rdma_cm_event_timeout_ms": 0, 00:21:24.555 "dhchap_digests": [ 00:21:24.555 "sha256", 
00:21:24.555 "sha384", 00:21:24.555 "sha512" 00:21:24.555 ], 00:21:24.555 "dhchap_dhgroups": [ 00:21:24.555 "null", 00:21:24.555 "ffdhe2048", 00:21:24.555 "ffdhe3072", 00:21:24.555 "ffdhe4096", 00:21:24.555 "ffdhe6144", 00:21:24.555 "ffdhe8192" 00:21:24.555 ] 00:21:24.555 } 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "method": "bdev_nvme_set_hotplug", 00:21:24.555 "params": { 00:21:24.555 "period_us": 100000, 00:21:24.555 "enable": false 00:21:24.555 } 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "method": "bdev_malloc_create", 00:21:24.555 "params": { 00:21:24.555 "name": "malloc0", 00:21:24.555 "num_blocks": 8192, 00:21:24.555 "block_size": 4096, 00:21:24.555 "physical_block_size": 4096, 00:21:24.555 "uuid": "48f54c8c-26b8-43d0-969c-833b9b3fc54d", 00:21:24.555 "optimal_io_boundary": 0 00:21:24.555 } 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "method": "bdev_wait_for_examine" 00:21:24.555 } 00:21:24.555 ] 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "subsystem": "nbd", 00:21:24.555 "config": [] 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "subsystem": "scheduler", 00:21:24.555 "config": [ 00:21:24.555 { 00:21:24.555 "method": "framework_set_scheduler", 00:21:24.555 "params": { 00:21:24.555 "name": "static" 00:21:24.555 } 00:21:24.555 } 00:21:24.555 ] 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "subsystem": "nvmf", 00:21:24.555 "config": [ 00:21:24.555 { 00:21:24.555 "method": "nvmf_set_config", 00:21:24.555 "params": { 00:21:24.555 "discovery_filter": "match_any", 00:21:24.555 "admin_cmd_passthru": { 00:21:24.555 "identify_ctrlr": false 00:21:24.555 } 00:21:24.555 } 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "method": "nvmf_set_max_subsystems", 00:21:24.555 "params": { 00:21:24.555 "max_subsystems": 1024 00:21:24.555 } 00:21:24.555 }, 00:21:24.555 { 00:21:24.555 "method": "nvmf_set_crdt", 00:21:24.556 "params": { 00:21:24.556 "crdt1": 0, 00:21:24.556 "crdt2": 0, 00:21:24.556 "crdt3": 0 00:21:24.556 } 00:21:24.556 }, 00:21:24.556 { 00:21:24.556 "method": "nvmf_create_transport", 00:21:24.556 "params": { 00:21:24.556 "trtype": "TCP", 00:21:24.556 "max_queue_depth": 128, 00:21:24.556 "max_io_qpairs_per_ctrlr": 127, 00:21:24.556 "in_capsule_data_size": 4096, 00:21:24.556 "max_io_size": 131072, 00:21:24.556 "io_unit_size": 131072, 00:21:24.556 "max_aq_depth": 128, 00:21:24.556 "num_shared_buffers": 511, 00:21:24.556 "buf_cache_size": 4294967295, 00:21:24.556 "dif_insert_or_strip": false, 00:21:24.556 "zcopy": false, 00:21:24.556 "c2h_success": false, 00:21:24.556 "sock_priority": 0, 00:21:24.556 "abort_timeout_sec": 1, 00:21:24.556 "ack_timeout": 0, 00:21:24.556 "data_wr_pool_size": 0 00:21:24.556 } 00:21:24.556 }, 00:21:24.556 { 00:21:24.556 "method": "nvmf_create_subsystem", 00:21:24.556 "params": { 00:21:24.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.556 "allow_any_host": false, 00:21:24.556 "serial_number": "00000000000000000000", 00:21:24.556 "model_number": "SPDK bdev Controller", 00:21:24.556 "max_namespaces": 32, 00:21:24.556 "min_cntlid": 1, 00:21:24.556 "max_cntlid": 65519, 00:21:24.556 "ana_reporting": false 00:21:24.556 } 00:21:24.556 }, 00:21:24.556 { 00:21:24.556 "method": "nvmf_subsystem_add_host", 00:21:24.556 "params": { 00:21:24.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.556 "host": "nqn.2016-06.io.spdk:host1", 00:21:24.556 "psk": "key0" 00:21:24.556 } 00:21:24.556 }, 00:21:24.556 { 00:21:24.556 "method": "nvmf_subsystem_add_ns", 00:21:24.556 "params": { 00:21:24.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.556 "namespace": { 00:21:24.556 "nsid": 1, 
00:21:24.556 "bdev_name": "malloc0", 00:21:24.556 "nguid": "48F54C8C26B843D0969C833B9B3FC54D", 00:21:24.556 "uuid": "48f54c8c-26b8-43d0-969c-833b9b3fc54d", 00:21:24.556 "no_auto_visible": false 00:21:24.556 } 00:21:24.556 } 00:21:24.556 }, 00:21:24.556 { 00:21:24.556 "method": "nvmf_subsystem_add_listener", 00:21:24.556 "params": { 00:21:24.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.556 "listen_address": { 00:21:24.556 "trtype": "TCP", 00:21:24.556 "adrfam": "IPv4", 00:21:24.556 "traddr": "10.0.0.2", 00:21:24.556 "trsvcid": "4420" 00:21:24.556 }, 00:21:24.556 "secure_channel": false, 00:21:24.556 "sock_impl": "ssl" 00:21:24.556 } 00:21:24.556 } 00:21:24.556 ] 00:21:24.556 } 00:21:24.556 ] 00:21:24.556 }' 00:21:24.556 21:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:24.852 21:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:24.852 "subsystems": [ 00:21:24.852 { 00:21:24.852 "subsystem": "keyring", 00:21:24.852 "config": [ 00:21:24.852 { 00:21:24.852 "method": "keyring_file_add_key", 00:21:24.852 "params": { 00:21:24.852 "name": "key0", 00:21:24.852 "path": "/tmp/tmp.DlrdnXcx52" 00:21:24.852 } 00:21:24.852 } 00:21:24.852 ] 00:21:24.852 }, 00:21:24.852 { 00:21:24.852 "subsystem": "iobuf", 00:21:24.852 "config": [ 00:21:24.852 { 00:21:24.852 "method": "iobuf_set_options", 00:21:24.852 "params": { 00:21:24.852 "small_pool_count": 8192, 00:21:24.852 "large_pool_count": 1024, 00:21:24.852 "small_bufsize": 8192, 00:21:24.852 "large_bufsize": 135168 00:21:24.852 } 00:21:24.852 } 00:21:24.852 ] 00:21:24.852 }, 00:21:24.852 { 00:21:24.852 "subsystem": "sock", 00:21:24.852 "config": [ 00:21:24.852 { 00:21:24.852 "method": "sock_set_default_impl", 00:21:24.852 "params": { 00:21:24.852 "impl_name": "posix" 00:21:24.852 } 00:21:24.852 }, 00:21:24.852 { 00:21:24.852 "method": "sock_impl_set_options", 00:21:24.852 "params": { 00:21:24.852 "impl_name": "ssl", 00:21:24.852 "recv_buf_size": 4096, 00:21:24.852 "send_buf_size": 4096, 00:21:24.852 "enable_recv_pipe": true, 00:21:24.852 "enable_quickack": false, 00:21:24.852 "enable_placement_id": 0, 00:21:24.852 "enable_zerocopy_send_server": true, 00:21:24.852 "enable_zerocopy_send_client": false, 00:21:24.852 "zerocopy_threshold": 0, 00:21:24.852 "tls_version": 0, 00:21:24.852 "enable_ktls": false 00:21:24.852 } 00:21:24.852 }, 00:21:24.852 { 00:21:24.852 "method": "sock_impl_set_options", 00:21:24.852 "params": { 00:21:24.852 "impl_name": "posix", 00:21:24.852 "recv_buf_size": 2097152, 00:21:24.852 "send_buf_size": 2097152, 00:21:24.853 "enable_recv_pipe": true, 00:21:24.853 "enable_quickack": false, 00:21:24.853 "enable_placement_id": 0, 00:21:24.853 "enable_zerocopy_send_server": true, 00:21:24.853 "enable_zerocopy_send_client": false, 00:21:24.853 "zerocopy_threshold": 0, 00:21:24.853 "tls_version": 0, 00:21:24.853 "enable_ktls": false 00:21:24.853 } 00:21:24.853 } 00:21:24.853 ] 00:21:24.853 }, 00:21:24.853 { 00:21:24.853 "subsystem": "vmd", 00:21:24.853 "config": [] 00:21:24.853 }, 00:21:24.853 { 00:21:24.853 "subsystem": "accel", 00:21:24.853 "config": [ 00:21:24.853 { 00:21:24.853 "method": "accel_set_options", 00:21:24.853 "params": { 00:21:24.853 "small_cache_size": 128, 00:21:24.853 "large_cache_size": 16, 00:21:24.853 "task_count": 2048, 00:21:24.853 "sequence_count": 2048, 00:21:24.853 "buf_count": 2048 00:21:24.853 } 00:21:24.853 } 00:21:24.853 ] 00:21:24.853 }, 00:21:24.853 { 00:21:24.853 "subsystem": "bdev", 
00:21:24.853 "config": [ 00:21:24.853 { 00:21:24.853 "method": "bdev_set_options", 00:21:24.853 "params": { 00:21:24.853 "bdev_io_pool_size": 65535, 00:21:24.853 "bdev_io_cache_size": 256, 00:21:24.853 "bdev_auto_examine": true, 00:21:24.853 "iobuf_small_cache_size": 128, 00:21:24.853 "iobuf_large_cache_size": 16 00:21:24.853 } 00:21:24.853 }, 00:21:24.853 { 00:21:24.853 "method": "bdev_raid_set_options", 00:21:24.853 "params": { 00:21:24.853 "process_window_size_kb": 1024 00:21:24.853 } 00:21:24.853 }, 00:21:24.853 { 00:21:24.853 "method": "bdev_iscsi_set_options", 00:21:24.853 "params": { 00:21:24.853 "timeout_sec": 30 00:21:24.853 } 00:21:24.853 }, 00:21:24.853 { 00:21:24.853 "method": "bdev_nvme_set_options", 00:21:24.853 "params": { 00:21:24.853 "action_on_timeout": "none", 00:21:24.853 "timeout_us": 0, 00:21:24.853 "timeout_admin_us": 0, 00:21:24.853 "keep_alive_timeout_ms": 10000, 00:21:24.853 "arbitration_burst": 0, 00:21:24.853 "low_priority_weight": 0, 00:21:24.853 "medium_priority_weight": 0, 00:21:24.853 "high_priority_weight": 0, 00:21:24.853 "nvme_adminq_poll_period_us": 10000, 00:21:24.853 "nvme_ioq_poll_period_us": 0, 00:21:24.853 "io_queue_requests": 512, 00:21:24.853 "delay_cmd_submit": true, 00:21:24.853 "transport_retry_count": 4, 00:21:24.853 "bdev_retry_count": 3, 00:21:24.853 "transport_ack_timeout": 0, 00:21:24.853 "ctrlr_loss_timeout_sec": 0, 00:21:24.853 "reconnect_delay_sec": 0, 00:21:24.853 "fast_io_fail_timeout_sec": 0, 00:21:24.853 "disable_auto_failback": false, 00:21:24.853 "generate_uuids": false, 00:21:24.853 "transport_tos": 0, 00:21:24.853 "nvme_error_stat": false, 00:21:24.853 "rdma_srq_size": 0, 00:21:24.853 "io_path_stat": false, 00:21:24.853 "allow_accel_sequence": false, 00:21:24.853 "rdma_max_cq_size": 0, 00:21:24.853 "rdma_cm_event_timeout_ms": 0, 00:21:24.853 "dhchap_digests": [ 00:21:24.853 "sha256", 00:21:24.853 "sha384", 00:21:24.853 "sha512" 00:21:24.853 ], 00:21:24.853 "dhchap_dhgroups": [ 00:21:24.853 "null", 00:21:24.853 "ffdhe2048", 00:21:24.853 "ffdhe3072", 00:21:24.853 "ffdhe4096", 00:21:24.853 "ffdhe6144", 00:21:24.853 "ffdhe8192" 00:21:24.853 ] 00:21:24.853 } 00:21:24.853 }, 00:21:24.853 { 00:21:24.853 "method": "bdev_nvme_attach_controller", 00:21:24.853 "params": { 00:21:24.853 "name": "nvme0", 00:21:24.853 "trtype": "TCP", 00:21:24.853 "adrfam": "IPv4", 00:21:24.853 "traddr": "10.0.0.2", 00:21:24.853 "trsvcid": "4420", 00:21:24.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.853 "prchk_reftag": false, 00:21:24.853 "prchk_guard": false, 00:21:24.853 "ctrlr_loss_timeout_sec": 0, 00:21:24.853 "reconnect_delay_sec": 0, 00:21:24.853 "fast_io_fail_timeout_sec": 0, 00:21:24.853 "psk": "key0", 00:21:24.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:24.853 "hdgst": false, 00:21:24.853 "ddgst": false 00:21:24.853 } 00:21:24.853 }, 00:21:24.853 { 00:21:24.853 "method": "bdev_nvme_set_hotplug", 00:21:24.853 "params": { 00:21:24.853 "period_us": 100000, 00:21:24.853 "enable": false 00:21:24.853 } 00:21:24.853 }, 00:21:24.853 { 00:21:24.853 "method": "bdev_enable_histogram", 00:21:24.853 "params": { 00:21:24.853 "name": "nvme0n1", 00:21:24.853 "enable": true 00:21:24.853 } 00:21:24.853 }, 00:21:24.853 { 00:21:24.853 "method": "bdev_wait_for_examine" 00:21:24.853 } 00:21:24.853 ] 00:21:24.853 }, 00:21:24.853 { 00:21:24.853 "subsystem": "nbd", 00:21:24.853 "config": [] 00:21:24.853 } 00:21:24.853 ] 00:21:24.853 }' 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 2221752 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 2221752 ']' 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2221752 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2221752 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2221752' 00:21:24.853 killing process with pid 2221752 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2221752 00:21:24.853 Received shutdown signal, test time was about 1.000000 seconds 00:21:24.853 00:21:24.853 Latency(us) 00:21:24.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.853 =================================================================================================================== 00:21:24.853 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2221752 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 2221559 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2221559 ']' 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2221559 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2221559 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2221559' 00:21:24.853 killing process with pid 2221559 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2221559 00:21:24.853 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2221559 00:21:25.115 21:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:25.115 "subsystems": [ 00:21:25.115 { 00:21:25.115 "subsystem": "keyring", 00:21:25.115 "config": [ 00:21:25.115 { 00:21:25.115 "method": "keyring_file_add_key", 00:21:25.115 "params": { 00:21:25.115 "name": "key0", 00:21:25.115 "path": "/tmp/tmp.DlrdnXcx52" 00:21:25.115 } 00:21:25.115 } 00:21:25.115 ] 00:21:25.115 }, 00:21:25.115 { 00:21:25.115 "subsystem": "iobuf", 00:21:25.115 "config": [ 00:21:25.115 { 00:21:25.115 "method": "iobuf_set_options", 00:21:25.115 "params": { 00:21:25.115 "small_pool_count": 8192, 00:21:25.115 "large_pool_count": 1024, 00:21:25.115 "small_bufsize": 8192, 00:21:25.115 "large_bufsize": 135168 00:21:25.115 } 00:21:25.115 } 00:21:25.115 ] 00:21:25.115 }, 00:21:25.115 { 00:21:25.115 "subsystem": "sock", 00:21:25.115 "config": [ 00:21:25.115 { 00:21:25.115 "method": "sock_set_default_impl", 00:21:25.115 "params": { 00:21:25.115 "impl_name": "posix" 00:21:25.115 } 00:21:25.115 }, 00:21:25.115 { 00:21:25.115 "method": 
"sock_impl_set_options", 00:21:25.115 "params": { 00:21:25.115 "impl_name": "ssl", 00:21:25.115 "recv_buf_size": 4096, 00:21:25.115 "send_buf_size": 4096, 00:21:25.115 "enable_recv_pipe": true, 00:21:25.115 "enable_quickack": false, 00:21:25.115 "enable_placement_id": 0, 00:21:25.115 "enable_zerocopy_send_server": true, 00:21:25.115 "enable_zerocopy_send_client": false, 00:21:25.115 "zerocopy_threshold": 0, 00:21:25.115 "tls_version": 0, 00:21:25.115 "enable_ktls": false 00:21:25.115 } 00:21:25.115 }, 00:21:25.115 { 00:21:25.115 "method": "sock_impl_set_options", 00:21:25.115 "params": { 00:21:25.115 "impl_name": "posix", 00:21:25.115 "recv_buf_size": 2097152, 00:21:25.115 "send_buf_size": 2097152, 00:21:25.115 "enable_recv_pipe": true, 00:21:25.115 "enable_quickack": false, 00:21:25.115 "enable_placement_id": 0, 00:21:25.115 "enable_zerocopy_send_server": true, 00:21:25.115 "enable_zerocopy_send_client": false, 00:21:25.115 "zerocopy_threshold": 0, 00:21:25.115 "tls_version": 0, 00:21:25.115 "enable_ktls": false 00:21:25.115 } 00:21:25.115 } 00:21:25.115 ] 00:21:25.115 }, 00:21:25.115 { 00:21:25.115 "subsystem": "vmd", 00:21:25.115 "config": [] 00:21:25.115 }, 00:21:25.115 { 00:21:25.115 "subsystem": "accel", 00:21:25.115 "config": [ 00:21:25.115 { 00:21:25.115 "method": "accel_set_options", 00:21:25.115 "params": { 00:21:25.115 "small_cache_size": 128, 00:21:25.115 "large_cache_size": 16, 00:21:25.115 "task_count": 2048, 00:21:25.115 "sequence_count": 2048, 00:21:25.115 "buf_count": 2048 00:21:25.115 } 00:21:25.115 } 00:21:25.115 ] 00:21:25.115 }, 00:21:25.115 { 00:21:25.115 "subsystem": "bdev", 00:21:25.115 "config": [ 00:21:25.115 { 00:21:25.115 "method": "bdev_set_options", 00:21:25.115 "params": { 00:21:25.115 "bdev_io_pool_size": 65535, 00:21:25.115 "bdev_io_cache_size": 256, 00:21:25.115 "bdev_auto_examine": true, 00:21:25.115 "iobuf_small_cache_size": 128, 00:21:25.115 "iobuf_large_cache_size": 16 00:21:25.115 } 00:21:25.115 }, 00:21:25.115 { 00:21:25.115 "method": "bdev_raid_set_options", 00:21:25.115 "params": { 00:21:25.115 "process_window_size_kb": 1024 00:21:25.115 } 00:21:25.115 }, 00:21:25.115 { 00:21:25.115 "method": "bdev_iscsi_set_options", 00:21:25.115 "params": { 00:21:25.115 "timeout_sec": 30 00:21:25.115 } 00:21:25.115 }, 00:21:25.115 { 00:21:25.115 "method": "bdev_nvme_set_options", 00:21:25.115 "params": { 00:21:25.115 "action_on_timeout": "none", 00:21:25.115 "timeout_us": 0, 00:21:25.115 "timeout_admin_us": 0, 00:21:25.115 "keep_alive_timeout_ms": 10000, 00:21:25.115 "arbitration_burst": 0, 00:21:25.115 "low_priority_weight": 0, 00:21:25.115 "medium_priority_weight": 0, 00:21:25.115 "high_priority_weight": 0, 00:21:25.115 "nvme_adminq_poll_period_us": 10000, 00:21:25.115 "nvme_ioq_poll_period_us": 0, 00:21:25.115 "io_queue_requests": 0, 00:21:25.116 "delay_cmd_submit": true, 00:21:25.116 "transport_retry_count": 4, 00:21:25.116 "bdev_retry_count": 3, 00:21:25.116 "transport_ack_timeout": 0, 00:21:25.116 "ctrlr_loss_timeout_sec": 0, 00:21:25.116 "reconnect_delay_sec": 0, 00:21:25.116 "fast_io_fail_timeout_sec": 0, 00:21:25.116 "disable_auto_failback": false, 00:21:25.116 "generate_uuids": false, 00:21:25.116 "transport_tos": 0, 00:21:25.116 "nvme_error_stat": false, 00:21:25.116 "rdma_srq_size": 0, 00:21:25.116 "io_path_stat": false, 00:21:25.116 "allow_accel_sequence": false, 00:21:25.116 "rdma_max_cq_size": 0, 00:21:25.116 "rdma_cm_event_timeout_ms": 0, 00:21:25.116 "dhchap_digests": [ 00:21:25.116 "sha256", 00:21:25.116 "sha384", 00:21:25.116 "sha512" 
00:21:25.116 ], 00:21:25.116 "dhchap_dhgroups": [ 00:21:25.116 "null", 00:21:25.116 "ffdhe2048", 00:21:25.116 "ffdhe3072", 00:21:25.116 "ffdhe4096", 00:21:25.116 "ffdhe6144", 00:21:25.116 "ffdhe8192" 00:21:25.116 ] 00:21:25.116 } 00:21:25.116 }, 00:21:25.116 { 00:21:25.116 "method": "bdev_nvme_set_hotplug", 00:21:25.116 "params": { 00:21:25.116 "period_us": 100000, 00:21:25.116 "enable": false 00:21:25.116 } 00:21:25.116 }, 00:21:25.116 { 00:21:25.116 "method": "bdev_malloc_create", 00:21:25.116 "params": { 00:21:25.116 "name": "malloc0", 00:21:25.116 "num_blocks": 8192, 00:21:25.116 "block_size": 4096, 00:21:25.116 "physical_block_size": 4096, 00:21:25.116 "uuid": "48f54c8c-26b8-43d0-969c-833b9b3fc54d", 00:21:25.116 "optimal_io_boundary": 0 00:21:25.116 } 00:21:25.116 }, 00:21:25.116 { 00:21:25.116 "method": "bdev_wait_for_examine" 00:21:25.116 } 00:21:25.116 ] 00:21:25.116 }, 00:21:25.116 { 00:21:25.116 "subsystem": "nbd", 00:21:25.116 "config": [] 00:21:25.116 }, 00:21:25.116 { 00:21:25.116 "subsystem": "scheduler", 00:21:25.116 "config": [ 00:21:25.116 { 00:21:25.116 "method": "framework_set_scheduler", 00:21:25.116 "params": { 00:21:25.116 "name": "static" 00:21:25.116 } 00:21:25.116 } 00:21:25.116 ] 00:21:25.116 }, 00:21:25.116 { 00:21:25.116 "subsystem": "nvmf", 00:21:25.116 "config": [ 00:21:25.116 { 00:21:25.116 "method": "nvmf_set_config", 00:21:25.116 "params": { 00:21:25.116 "discovery_filter": "match_any", 00:21:25.116 "admin_cmd_passthru": { 00:21:25.116 "identify_ctrlr": false 00:21:25.116 } 00:21:25.116 } 00:21:25.116 }, 00:21:25.116 { 00:21:25.116 "method": "nvmf_set_max_subsystems", 00:21:25.116 "params": { 00:21:25.116 "max_subsystems": 1024 00:21:25.116 } 00:21:25.116 }, 00:21:25.116 { 00:21:25.116 "method": "nvmf_set_crdt", 00:21:25.116 "params": { 00:21:25.116 "crdt1": 0, 00:21:25.116 "crdt2": 0, 00:21:25.116 "crdt3": 0 00:21:25.116 } 00:21:25.116 }, 00:21:25.116 { 00:21:25.116 "method": "nvmf_create_transport", 00:21:25.116 "params": { 00:21:25.116 "trtype": "TCP", 00:21:25.116 "max_queue_depth": 128, 00:21:25.116 "max_io_qpairs_per_ctrlr": 127, 00:21:25.116 "in_capsule_data_size": 4096, 00:21:25.116 "max_io_size": 131072, 00:21:25.116 "io_unit_size": 131072, 00:21:25.116 "max_aq_depth": 128, 00:21:25.116 "num_shared_buffers": 511, 00:21:25.116 "buf_cache_size": 4294967295, 00:21:25.116 "dif_insert_or_strip": false, 00:21:25.116 "zcopy": false, 00:21:25.116 "c2h_success": false, 00:21:25.116 "sock_priority": 0, 00:21:25.116 "abort_timeout_sec": 1, 00:21:25.116 "ack_timeout": 0, 00:21:25.116 "data_wr_pool_size": 0 00:21:25.116 } 00:21:25.116 }, 00:21:25.116 { 00:21:25.116 "method": "nvmf_create_subsystem", 00:21:25.116 "params": { 00:21:25.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.116 "allow_any_host": false, 00:21:25.116 "serial_number": "00000000000000000000", 00:21:25.116 "model_number": "SPDK bdev Controller", 00:21:25.116 "max_namespaces": 32, 00:21:25.116 "min_cntlid": 1, 00:21:25.116 "max_cntlid": 65519, 00:21:25.116 "ana_reporting": false 00:21:25.116 } 00:21:25.116 }, 00:21:25.116 { 00:21:25.116 "method": "nvmf_subsystem_add_host", 00:21:25.116 "params": { 00:21:25.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.116 "host": "nqn.2016-06.io.spdk:host1", 00:21:25.116 "psk": "key0" 00:21:25.116 } 00:21:25.116 }, 00:21:25.116 { 00:21:25.116 "method": "nvmf_subsystem_add_ns", 00:21:25.116 "params": { 00:21:25.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.116 "namespace": { 00:21:25.116 "nsid": 1, 00:21:25.116 "bdev_name": "malloc0", 00:21:25.116 
"nguid": "48F54C8C26B843D0969C833B9B3FC54D", 00:21:25.116 "uuid": "48f54c8c-26b8-43d0-969c-833b9b3fc54d", 00:21:25.116 "no_auto_visible": false 00:21:25.116 } 00:21:25.116 } 00:21:25.116 }, 00:21:25.116 { 00:21:25.116 "method": "nvmf_subsystem_add_listener", 00:21:25.116 "params": { 00:21:25.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.116 "listen_address": { 00:21:25.116 "trtype": "TCP", 00:21:25.116 "adrfam": "IPv4", 00:21:25.116 "traddr": "10.0.0.2", 00:21:25.116 "trsvcid": "4420" 00:21:25.116 }, 00:21:25.116 "secure_channel": false, 00:21:25.116 "sock_impl": "ssl" 00:21:25.116 } 00:21:25.116 } 00:21:25.116 ] 00:21:25.116 } 00:21:25.116 ] 00:21:25.116 }' 00:21:25.116 21:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:25.116 21:37:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:25.116 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:25.116 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.116 21:37:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2222271 00:21:25.116 21:37:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2222271 00:21:25.116 21:37:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:25.116 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2222271 ']' 00:21:25.116 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.116 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.116 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.116 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.116 21:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.116 [2024-07-15 21:37:14.825813] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:21:25.116 [2024-07-15 21:37:14.825869] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.116 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.116 [2024-07-15 21:37:14.890439] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.377 [2024-07-15 21:37:14.955925] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.377 [2024-07-15 21:37:14.955960] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.377 [2024-07-15 21:37:14.955968] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.377 [2024-07-15 21:37:14.955974] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.377 [2024-07-15 21:37:14.955979] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:25.377 [2024-07-15 21:37:14.956027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.377 [2024-07-15 21:37:15.153728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.637 [2024-07-15 21:37:15.185739] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:25.637 [2024-07-15 21:37:15.198307] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2222615 00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2222615 /var/tmp/bdevperf.sock 00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2222615 ']' 00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
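At this point the target reports that it is listening for NVMe/TCP on 10.0.0.2 port 4420 with TLS support still flagged as experimental, and the test moves on to launching bdevperf with its own RPC socket at /var/tmp/bdevperf.sock. Before attaching an initiator, the configuration the target actually loaded from the /dev/fd config can be inspected over its default RPC socket; a hedged sketch, with rpc.py path and socket taken from this log and keyring_get_keys assumed to be available in this SPDK build:

```sh
# Inspect the live target: list nvmf subsystems/listeners and, if the RPC
# exists in this build, the registered keyring entries.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$RPC" -s /var/tmp/spdk.sock nvmf_get_subsystems
"$RPC" -s /var/tmp/spdk.sock keyring_get_keys   # assumption: present in v24.09-pre
```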
00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.899 21:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:25.899 "subsystems": [ 00:21:25.899 { 00:21:25.899 "subsystem": "keyring", 00:21:25.899 "config": [ 00:21:25.899 { 00:21:25.899 "method": "keyring_file_add_key", 00:21:25.899 "params": { 00:21:25.899 "name": "key0", 00:21:25.899 "path": "/tmp/tmp.DlrdnXcx52" 00:21:25.899 } 00:21:25.899 } 00:21:25.899 ] 00:21:25.899 }, 00:21:25.899 { 00:21:25.899 "subsystem": "iobuf", 00:21:25.899 "config": [ 00:21:25.899 { 00:21:25.899 "method": "iobuf_set_options", 00:21:25.899 "params": { 00:21:25.899 "small_pool_count": 8192, 00:21:25.899 "large_pool_count": 1024, 00:21:25.899 "small_bufsize": 8192, 00:21:25.899 "large_bufsize": 135168 00:21:25.899 } 00:21:25.899 } 00:21:25.899 ] 00:21:25.899 }, 00:21:25.899 { 00:21:25.899 "subsystem": "sock", 00:21:25.899 "config": [ 00:21:25.899 { 00:21:25.899 "method": "sock_set_default_impl", 00:21:25.899 "params": { 00:21:25.899 "impl_name": "posix" 00:21:25.899 } 00:21:25.899 }, 00:21:25.899 { 00:21:25.899 "method": "sock_impl_set_options", 00:21:25.899 "params": { 00:21:25.899 "impl_name": "ssl", 00:21:25.899 "recv_buf_size": 4096, 00:21:25.899 "send_buf_size": 4096, 00:21:25.899 "enable_recv_pipe": true, 00:21:25.899 "enable_quickack": false, 00:21:25.899 "enable_placement_id": 0, 00:21:25.899 "enable_zerocopy_send_server": true, 00:21:25.899 "enable_zerocopy_send_client": false, 00:21:25.899 "zerocopy_threshold": 0, 00:21:25.899 "tls_version": 0, 00:21:25.899 "enable_ktls": false 00:21:25.899 } 00:21:25.899 }, 00:21:25.899 { 00:21:25.899 "method": "sock_impl_set_options", 00:21:25.899 "params": { 00:21:25.899 "impl_name": "posix", 00:21:25.899 "recv_buf_size": 2097152, 00:21:25.899 "send_buf_size": 2097152, 00:21:25.899 "enable_recv_pipe": true, 00:21:25.899 "enable_quickack": false, 00:21:25.899 "enable_placement_id": 0, 00:21:25.899 "enable_zerocopy_send_server": true, 00:21:25.899 "enable_zerocopy_send_client": false, 00:21:25.899 "zerocopy_threshold": 0, 00:21:25.899 "tls_version": 0, 00:21:25.899 "enable_ktls": false 00:21:25.899 } 00:21:25.899 } 00:21:25.899 ] 00:21:25.899 }, 00:21:25.899 { 00:21:25.899 "subsystem": "vmd", 00:21:25.899 "config": [] 00:21:25.899 }, 00:21:25.899 { 00:21:25.899 "subsystem": "accel", 00:21:25.899 "config": [ 00:21:25.899 { 00:21:25.899 "method": "accel_set_options", 00:21:25.899 "params": { 00:21:25.899 "small_cache_size": 128, 00:21:25.899 "large_cache_size": 16, 00:21:25.899 "task_count": 2048, 00:21:25.899 "sequence_count": 2048, 00:21:25.899 "buf_count": 2048 00:21:25.899 } 00:21:25.899 } 00:21:25.899 ] 00:21:25.899 }, 00:21:25.899 { 00:21:25.899 "subsystem": "bdev", 00:21:25.899 "config": [ 00:21:25.899 { 00:21:25.899 "method": "bdev_set_options", 00:21:25.899 "params": { 00:21:25.899 "bdev_io_pool_size": 65535, 00:21:25.899 "bdev_io_cache_size": 256, 00:21:25.899 "bdev_auto_examine": true, 00:21:25.899 "iobuf_small_cache_size": 128, 00:21:25.899 "iobuf_large_cache_size": 16 00:21:25.899 } 00:21:25.899 }, 00:21:25.899 { 00:21:25.899 "method": "bdev_raid_set_options", 00:21:25.899 "params": { 00:21:25.899 "process_window_size_kb": 1024 00:21:25.899 } 
00:21:25.899 }, 00:21:25.899 { 00:21:25.899 "method": "bdev_iscsi_set_options", 00:21:25.899 "params": { 00:21:25.899 "timeout_sec": 30 00:21:25.899 } 00:21:25.899 }, 00:21:25.900 { 00:21:25.900 "method": "bdev_nvme_set_options", 00:21:25.900 "params": { 00:21:25.900 "action_on_timeout": "none", 00:21:25.900 "timeout_us": 0, 00:21:25.900 "timeout_admin_us": 0, 00:21:25.900 "keep_alive_timeout_ms": 10000, 00:21:25.900 "arbitration_burst": 0, 00:21:25.900 "low_priority_weight": 0, 00:21:25.900 "medium_priority_weight": 0, 00:21:25.900 "high_priority_weight": 0, 00:21:25.900 "nvme_adminq_poll_period_us": 10000, 00:21:25.900 "nvme_ioq_poll_period_us": 0, 00:21:25.900 "io_queue_requests": 512, 00:21:25.900 "delay_cmd_submit": true, 00:21:25.900 "transport_retry_count": 4, 00:21:25.900 "bdev_retry_count": 3, 00:21:25.900 "transport_ack_timeout": 0, 00:21:25.900 "ctrlr_loss_timeout_sec": 0, 00:21:25.900 "reconnect_delay_sec": 0, 00:21:25.900 "fast_io_fail_timeout_sec": 0, 00:21:25.900 "disable_auto_failback": false, 00:21:25.900 "generate_uuids": false, 00:21:25.900 "transport_tos": 0, 00:21:25.900 "nvme_error_stat": false, 00:21:25.900 "rdma_srq_size": 0, 00:21:25.900 "io_path_stat": false, 00:21:25.900 "allow_accel_sequence": false, 00:21:25.900 "rdma_max_cq_size": 0, 00:21:25.900 "rdma_cm_event_timeout_ms": 0, 00:21:25.900 "dhchap_digests": [ 00:21:25.900 "sha256", 00:21:25.900 "sha384", 00:21:25.900 "sha512" 00:21:25.900 ], 00:21:25.900 "dhchap_dhgroups": [ 00:21:25.900 "null", 00:21:25.900 "ffdhe2048", 00:21:25.900 "ffdhe3072", 00:21:25.900 "ffdhe4096", 00:21:25.900 "ffdhe6144", 00:21:25.900 "ffdhe8192" 00:21:25.900 ] 00:21:25.900 } 00:21:25.900 }, 00:21:25.900 { 00:21:25.900 "method": "bdev_nvme_attach_controller", 00:21:25.900 "params": { 00:21:25.900 "name": "nvme0", 00:21:25.900 "trtype": "TCP", 00:21:25.900 "adrfam": "IPv4", 00:21:25.900 "traddr": "10.0.0.2", 00:21:25.900 "trsvcid": "4420", 00:21:25.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.900 "prchk_reftag": false, 00:21:25.900 "prchk_guard": false, 00:21:25.900 "ctrlr_loss_timeout_sec": 0, 00:21:25.900 "reconnect_delay_sec": 0, 00:21:25.900 "fast_io_fail_timeout_sec": 0, 00:21:25.900 "psk": "key0", 00:21:25.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:25.900 "hdgst": false, 00:21:25.900 "ddgst": false 00:21:25.900 } 00:21:25.900 }, 00:21:25.900 { 00:21:25.900 "method": "bdev_nvme_set_hotplug", 00:21:25.900 "params": { 00:21:25.900 "period_us": 100000, 00:21:25.900 "enable": false 00:21:25.900 } 00:21:25.900 }, 00:21:25.900 { 00:21:25.900 "method": "bdev_enable_histogram", 00:21:25.900 "params": { 00:21:25.900 "name": "nvme0n1", 00:21:25.900 "enable": true 00:21:25.900 } 00:21:25.900 }, 00:21:25.900 { 00:21:25.900 "method": "bdev_wait_for_examine" 00:21:25.900 } 00:21:25.900 ] 00:21:25.900 }, 00:21:25.900 { 00:21:25.900 "subsystem": "nbd", 00:21:25.900 "config": [] 00:21:25.900 } 00:21:25.900 ] 00:21:25.900 }' 00:21:25.900 [2024-07-15 21:37:15.679940] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
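The second JSON document is the initiator-side configuration that bdevperf receives through /dev/fd/63: the same keyring key plus a bdev_nvme_attach_controller entry pointing at nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 with "psk": "key0", so the verify workload runs over a TLS-protected NVMe/TCP connection. Roughly the same setup can be done with runtime RPCs instead of a config file; the sketch below is hedged and reuses only commands and flags that appear elsewhere in this log, passing the PSK as a raw file path where the config above references it through the keyring entry:

```sh
# Hedged sketch: start bdevperf in "wait for RPC" mode, attach the
# TLS-protected controller at runtime, then drive the workload via
# bdevperf.py. Paths, NQNs, and flags mirror the ones in this log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock

"$SPDK/build/examples/bdevperf" -m 2 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 &
sleep 1   # the real test polls the RPC socket with waitforlisten instead

"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DlrdnXcx52

"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
```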
00:21:25.900 [2024-07-15 21:37:15.679989] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222615 ] 00:21:26.161 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.161 [2024-07-15 21:37:15.754741] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.161 [2024-07-15 21:37:15.808709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.161 [2024-07-15 21:37:15.941954] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:26.732 21:37:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.732 21:37:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:26.732 21:37:16 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:26.732 21:37:16 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:21:26.992 21:37:16 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.992 21:37:16 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:26.992 Running I/O for 1 seconds... 00:21:28.375 00:21:28.375 Latency(us) 00:21:28.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.375 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:28.375 Verification LBA range: start 0x0 length 0x2000 00:21:28.375 nvme0n1 : 1.06 2140.99 8.36 0.00 0.00 58302.59 6089.39 103546.88 00:21:28.375 =================================================================================================================== 00:21:28.375 Total : 2140.99 8.36 0.00 0.00 58302.59 6089.39 103546.88 00:21:28.375 0 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:28.375 nvmf_trace.0 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2222615 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2222615 ']' 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 2222615 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2222615 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2222615' 00:21:28.375 killing process with pid 2222615 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2222615 00:21:28.375 Received shutdown signal, test time was about 1.000000 seconds 00:21:28.375 00:21:28.375 Latency(us) 00:21:28.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.375 =================================================================================================================== 00:21:28.375 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:28.375 21:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2222615 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:28.375 rmmod nvme_tcp 00:21:28.375 rmmod nvme_fabrics 00:21:28.375 rmmod nvme_keyring 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2222271 ']' 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2222271 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2222271 ']' 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2222271 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2222271 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2222271' 00:21:28.375 killing process with pid 2222271 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2222271 00:21:28.375 21:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2222271 00:21:28.635 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:28.636 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:28.636 21:37:18 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:28.636 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:28.636 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:28.636 21:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.636 21:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.636 21:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.545 21:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:30.545 21:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.1khaWzZplv /tmp/tmp.QTLLGiA2AF /tmp/tmp.DlrdnXcx52 00:21:30.805 00:21:30.805 real 1m24.238s 00:21:30.805 user 2m7.955s 00:21:30.805 sys 0m29.067s 00:21:30.805 21:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:30.805 21:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.805 ************************************ 00:21:30.805 END TEST nvmf_tls 00:21:30.805 ************************************ 00:21:30.805 21:37:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:30.805 21:37:20 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:30.805 21:37:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:30.805 21:37:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:30.805 21:37:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:30.805 ************************************ 00:21:30.805 START TEST nvmf_fips 00:21:30.805 ************************************ 00:21:30.805 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:30.805 * Looking for test storage... 
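This closes out nvmf_tls and hands control to the nvmf_fips test. Before it exercises NVMe/TCP with a TLS PSK, fips.sh validates the crypto environment: OpenSSL must be 3.0.0 or newer, a FIPS provider has to appear in the provider list, and a non-approved digest such as MD5 must be rejected (the "Error setting digest" lines further down are that expected failure). A rough standalone equivalent of those checks, with output text and provider names varying by distribution:

```sh
# Hedged sketch of the OpenSSL pre-checks fips.sh performs below.
openssl version                                   # log shows 3.0.9, i.e. >= 3.0.0
openssl list -providers | grep -i fips || echo "no FIPS provider loaded"

if echo -n test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 still allowed - FIPS restrictions are NOT active"
else
    echo "MD5 rejected - FIPS restrictions are active"
fi
```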
00:21:30.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:30.805 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:30.805 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:30.805 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:30.805 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:30.805 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:30.805 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.806 21:37:20 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:30.806 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:31.066 Error setting digest 00:21:31.066 00A25F9EE07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:31.066 00A25F9EE07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:31.066 21:37:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:37.649 
21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:37.649 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:37.649 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.649 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:37.649 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:37.650 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.650 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.910 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.910 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.910 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:37.910 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.910 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.910 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.910 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:37.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:37.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:21:37.910 00:21:37.910 --- 10.0.0.2 ping statistics --- 00:21:37.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.910 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:21:37.910 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:38.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:21:38.170 00:21:38.170 --- 10.0.0.1 ping statistics --- 00:21:38.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.170 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2227255 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2227255 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2227255 ']' 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.170 21:37:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:38.170 [2024-07-15 21:37:27.852242] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:21:38.170 [2024-07-15 21:37:27.852312] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.170 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.170 [2024-07-15 21:37:27.938977] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.431 [2024-07-15 21:37:28.030721] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.431 [2024-07-15 21:37:28.030775] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:38.431 [2024-07-15 21:37:28.030783] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.431 [2024-07-15 21:37:28.030790] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.431 [2024-07-15 21:37:28.030796] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.431 [2024-07-15 21:37:28.030822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.029 21:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.029 21:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:39.029 21:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:39.029 21:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:39.029 21:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:39.029 21:37:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.029 21:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:39.029 21:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:39.029 21:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.029 21:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:39.029 21:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.029 21:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.029 21:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.029 21:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:39.029 [2024-07-15 21:37:28.818277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.030 [2024-07-15 21:37:28.834272] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.030 [2024-07-15 21:37:28.834501] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.291 [2024-07-15 21:37:28.864356] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:39.291 malloc0 00:21:39.291 21:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:39.291 21:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2227346 00:21:39.291 21:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2227346 /var/tmp/bdevperf.sock 00:21:39.291 21:37:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:39.291 21:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2227346 ']' 00:21:39.291 21:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.291 21:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:21:39.291 21:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.291 21:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.291 21:37:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:39.291 [2024-07-15 21:37:28.967903] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:21:39.291 [2024-07-15 21:37:28.967986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227346 ] 00:21:39.291 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.291 [2024-07-15 21:37:29.025254] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.291 [2024-07-15 21:37:29.090748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.233 21:37:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:40.233 21:37:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:40.233 21:37:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:40.233 [2024-07-15 21:37:29.862742] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.233 [2024-07-15 21:37:29.862806] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:40.233 TLSTESTn1 00:21:40.233 21:37:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:40.493 Running I/O for 10 seconds... 
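A hedged, condensed sketch of the TLS bring-up traced above in fips.sh: the interchange-format PSK is written to a 0600-mode file, bdevperf is started on its own RPC socket in wait mode (-z), a controller is attached over TCP with --psk, and the verify workload is kicked off. Paths, the 10.0.0.2 listener address, the NQNs, and the PSK are the ones from this run and would differ elsewhere.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    KEY=$SPDK/test/nvmf/fips/key.txt

    # Write the test PSK (interchange format) and restrict its permissions
    echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: > "$KEY"
    chmod 0600 "$KEY"

    # Start bdevperf on a private RPC socket and let it wait for configuration (-z);
    # the test script additionally waits for /var/tmp/bdevperf.sock before issuing RPCs
    "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &

    # Attach a controller to the TLS-enabled listener using the same PSK file
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$KEY"

    # Run the configured verify workload for 10 seconds
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests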
00:21:50.526 00:21:50.526 Latency(us) 00:21:50.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.526 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:50.526 Verification LBA range: start 0x0 length 0x2000 00:21:50.526 TLSTESTn1 : 10.04 2572.96 10.05 0.00 0.00 49656.87 6444.37 78643.20 00:21:50.526 =================================================================================================================== 00:21:50.526 Total : 2572.96 10.05 0.00 0.00 49656.87 6444.37 78643.20 00:21:50.526 0 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:50.526 nvmf_trace.0 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2227346 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2227346 ']' 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2227346 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2227346 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2227346' 00:21:50.526 killing process with pid 2227346 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2227346 00:21:50.526 Received shutdown signal, test time was about 10.000000 seconds 00:21:50.526 00:21:50.526 Latency(us) 00:21:50.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.526 =================================================================================================================== 00:21:50.526 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:50.526 [2024-07-15 21:37:40.283512] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:50.526 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2227346 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.815 rmmod nvme_tcp 00:21:50.815 rmmod nvme_fabrics 00:21:50.815 rmmod nvme_keyring 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2227255 ']' 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2227255 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2227255 ']' 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2227255 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2227255 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2227255' 00:21:50.815 killing process with pid 2227255 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2227255 00:21:50.815 [2024-07-15 21:37:40.535153] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:50.815 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2227255 00:21:51.074 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:51.074 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:51.074 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:51.074 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:51.074 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:51.074 21:37:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.074 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.074 21:37:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.984 21:37:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:52.984 21:37:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:52.984 00:21:52.984 real 0m22.303s 00:21:52.984 user 0m22.988s 00:21:52.984 sys 0m10.005s 00:21:52.984 21:37:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:52.984 21:37:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:52.984 ************************************ 00:21:52.984 END TEST nvmf_fips 
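The teardown traced above reduces to a short, hedged sketch: archive the target's shared-memory trace file for offline analysis, stop both processes, unload the NVMe/TCP kernel modules, and delete the PSK file. The output path and the PIDs (2227346 for bdevperf, 2227255 for the target) are specific to this run.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    OUT=$SPDK/../output

    # Archive /dev/shm/nvmf_trace.0 so the trace survives the workspace cleanup
    tar -C /dev/shm/ -cvzf "$OUT/nvmf_trace.0_shm.tar.gz" nvmf_trace.0

    # Stop bdevperf and the nvmf target (PIDs from this run)
    kill 2227346; wait 2227346
    kill 2227255; wait 2227255

    # Unload the initiator-side kernel modules and remove the PSK file
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    rm -f "$SPDK/test/nvmf/fips/key.txt"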
00:21:52.984 ************************************ 00:21:52.984 21:37:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:52.984 21:37:42 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:52.984 21:37:42 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:52.984 21:37:42 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:52.984 21:37:42 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:52.984 21:37:42 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:52.984 21:37:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:01.122 21:37:49 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.122 21:37:49 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:01.122 21:37:49 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:01.122 21:37:49 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:01.122 21:37:49 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:01.122 21:37:49 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:01.122 21:37:49 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:01.122 21:37:49 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:01.122 21:37:49 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:01.123 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:01.123 21:37:49 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:01.123 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:01.123 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:01.123 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:01.123 21:37:49 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:01.123 21:37:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:01.123 21:37:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
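The device discovery traced above (gather_supported_nvmf_pci_devs) amounts to matching the known E810/X722/Mellanox PCI IDs and mapping each matching function to its kernel netdev through sysfs. The common.sh implementation uses a pre-built PCI cache; the following is only a minimal hedged sketch of the same mapping via the sysfs layout the script itself reads, with the 0x159b device ID and cvl_0_* names coming from this machine.

    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == "$intel" ]] || continue
        dev_id=$(cat "$pci/device")
        # E810 IDs checked in the trace: 0x1592 and 0x159b
        [[ $dev_id == 0x1592 || $dev_id == 0x159b ]] || continue
        # Each network-capable function exposes its netdev name(s) under net/
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue
            echo "Found ${pci##*/} ($intel - $dev_id): ${net##*/}"
        done
    done

On this node the loop reports both E810 ports, 0000:4b:00.0 -> cvl_0_0 and 0000:4b:00.1 -> cvl_0_1, which is what populates TCP_INTERFACE_LIST above.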
00:22:01.123 21:37:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:01.123 ************************************ 00:22:01.123 START TEST nvmf_perf_adq 00:22:01.123 ************************************ 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:01.123 * Looking for test storage... 00:22:01.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:01.123 21:37:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.723 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:07.724 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:07.724 Found 0000:4b:00.1 (0x8086 - 0x159b) 
00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:07.724 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:07.724 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:07.724 21:37:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:08.295 21:37:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:10.206 21:37:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:15.518 21:38:04 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:15.518 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:15.518 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:15.518 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.518 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:15.519 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.519 21:38:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.519 21:38:05 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:15.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:22:15.519 00:22:15.519 --- 10.0.0.2 ping statistics --- 00:22:15.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.519 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:15.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:22:15.519 00:22:15.519 --- 10.0.0.1 ping statistics --- 00:22:15.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.519 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2239228 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2239228 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2239228 ']' 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.519 21:38:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.519 [2024-07-15 21:38:05.300666] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
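The network bring-up traced above (nvmf_tcp_init) isolates one E810 port in a network namespace so the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1 in the root namespace) talk over a real link. A hedged sketch of the same topology, with interface names and addresses taken from this run:

    TGT_IF=cvl_0_0  INIT_IF=cvl_0_1  NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INIT_IF"

    # Move the target-side port into its own namespace and address both ends
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INIT_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

    ip link set "$INIT_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP (port 4420) in from the initiator side, then verify reachability both ways
    iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

The target itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc), which is why the SPDK/DPDK initialization banner above follows the ping checks.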
00:22:15.519 [2024-07-15 21:38:05.300714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.780 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.781 [2024-07-15 21:38:05.359895] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.781 [2024-07-15 21:38:05.427868] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.781 [2024-07-15 21:38:05.427903] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.781 [2024-07-15 21:38:05.427911] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.781 [2024-07-15 21:38:05.427917] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.781 [2024-07-15 21:38:05.427923] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.781 [2024-07-15 21:38:05.428059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.781 [2024-07-15 21:38:05.428169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.781 [2024-07-15 21:38:05.428282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.781 [2024-07-15 21:38:05.428283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.353 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.353 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:16.353 21:38:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:16.353 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:16.353 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.353 21:38:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.353 21:38:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:16.353 21:38:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:16.353 21:38:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:16.353 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.353 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.353 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.613 21:38:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:16.613 21:38:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:16.613 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.613 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.613 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.613 21:38:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:16.613 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.613 21:38:06 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:16.613 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.613 21:38:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:16.613 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.613 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.614 [2024-07-15 21:38:06.282170] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.614 Malloc1 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.614 [2024-07-15 21:38:06.341526] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2239433 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:16.614 21:38:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:16.614 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.162 21:38:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:19.162 21:38:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.162 21:38:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.162 21:38:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.162 21:38:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:19.162 
"tick_rate": 2400000000, 00:22:19.162 "poll_groups": [ 00:22:19.162 { 00:22:19.162 "name": "nvmf_tgt_poll_group_000", 00:22:19.162 "admin_qpairs": 1, 00:22:19.162 "io_qpairs": 1, 00:22:19.162 "current_admin_qpairs": 1, 00:22:19.162 "current_io_qpairs": 1, 00:22:19.162 "pending_bdev_io": 0, 00:22:19.162 "completed_nvme_io": 20532, 00:22:19.162 "transports": [ 00:22:19.162 { 00:22:19.162 "trtype": "TCP" 00:22:19.162 } 00:22:19.162 ] 00:22:19.162 }, 00:22:19.162 { 00:22:19.162 "name": "nvmf_tgt_poll_group_001", 00:22:19.162 "admin_qpairs": 0, 00:22:19.162 "io_qpairs": 1, 00:22:19.162 "current_admin_qpairs": 0, 00:22:19.162 "current_io_qpairs": 1, 00:22:19.162 "pending_bdev_io": 0, 00:22:19.162 "completed_nvme_io": 27782, 00:22:19.162 "transports": [ 00:22:19.162 { 00:22:19.162 "trtype": "TCP" 00:22:19.162 } 00:22:19.162 ] 00:22:19.162 }, 00:22:19.162 { 00:22:19.162 "name": "nvmf_tgt_poll_group_002", 00:22:19.162 "admin_qpairs": 0, 00:22:19.162 "io_qpairs": 1, 00:22:19.162 "current_admin_qpairs": 0, 00:22:19.162 "current_io_qpairs": 1, 00:22:19.162 "pending_bdev_io": 0, 00:22:19.162 "completed_nvme_io": 20706, 00:22:19.162 "transports": [ 00:22:19.162 { 00:22:19.162 "trtype": "TCP" 00:22:19.162 } 00:22:19.162 ] 00:22:19.162 }, 00:22:19.162 { 00:22:19.162 "name": "nvmf_tgt_poll_group_003", 00:22:19.162 "admin_qpairs": 0, 00:22:19.162 "io_qpairs": 1, 00:22:19.162 "current_admin_qpairs": 0, 00:22:19.162 "current_io_qpairs": 1, 00:22:19.162 "pending_bdev_io": 0, 00:22:19.162 "completed_nvme_io": 19981, 00:22:19.162 "transports": [ 00:22:19.162 { 00:22:19.162 "trtype": "TCP" 00:22:19.162 } 00:22:19.162 ] 00:22:19.162 } 00:22:19.162 ] 00:22:19.162 }' 00:22:19.162 21:38:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:19.162 21:38:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:19.162 21:38:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:19.162 21:38:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:19.162 21:38:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2239433 00:22:27.308 Initializing NVMe Controllers 00:22:27.308 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:27.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:27.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:27.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:27.308 Initialization complete. Launching workers. 
00:22:27.308 ======================================================== 00:22:27.308 Latency(us) 00:22:27.308 Device Information : IOPS MiB/s Average min max 00:22:27.308 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11166.10 43.62 5732.39 1671.55 9947.87 00:22:27.308 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14294.20 55.84 4477.47 1625.39 10499.14 00:22:27.308 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13768.40 53.78 4648.68 1007.64 11097.02 00:22:27.308 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14406.80 56.28 4442.24 969.37 10379.42 00:22:27.308 ======================================================== 00:22:27.308 Total : 53635.50 209.51 4773.21 969.37 11097.02 00:22:27.308 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:27.308 rmmod nvme_tcp 00:22:27.308 rmmod nvme_fabrics 00:22:27.308 rmmod nvme_keyring 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2239228 ']' 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2239228 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2239228 ']' 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2239228 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2239228 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2239228' 00:22:27.308 killing process with pid 2239228 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2239228 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2239228 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:27.308 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:27.309 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:27.309 21:38:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:27.309 21:38:16 nvmf_tcp.nvmf_perf_adq -- 
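For reference, the ADQ-mode configuration that produced the numbers above strings together the rpc_cmd calls traced earlier against the target started with --wait-for-rpc, followed by the spdk_nvme_perf invocation and the poll-group check. The sketch below is hedged: addresses, core masks, the Malloc1 backing bdev, and the RPC() wrapper are taken from or modeled on this run, not a general recipe.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC() { "$SPDK/scripts/rpc.py" "$@"; }   # convenience wrapper, default /var/tmp/spdk.sock

    # Enable placement-id based socket grouping on the posix impl before framework init
    RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    RPC framework_start_init

    # TCP transport with sock priority 0, a RAM-backed namespace, and a listener on 10.0.0.2:4420
    RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    RPC bdev_malloc_create 64 512 -b Malloc1
    RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Drive the target from four cores (0xF0) with the same randread workload as above
    "$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &

    # While I/O runs, confirm each of the four poll groups owns exactly one I/O qpair
    RPC nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l    # the test above expects this count to be 4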
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.309 21:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.309 21:38:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.222 21:38:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:29.222 21:38:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:29.222 21:38:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:31.137 21:38:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:33.046 21:38:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:38.330 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.331 21:38:27 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:38.331 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:38.331 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
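The common.sh trace above is the E810 discovery step: each PCI function matching vendor 0x8086, device 0x159b is collected, and the kernel interface bound to it is then read back out of sysfs (the two cvl_0_* names reported next). A minimal standalone sketch of that lookup, assuming lspci is available and using the 8086:159b ID seen in this log, could be:

    # Enumerate Intel E810 (8086:159b) PCI functions and the net devices bound to them.
    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue          # function has no netdev bound
            echo "Found net device under $pci: $(basename "$netdir")"
        done
    done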
00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:38.331 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:38.331 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.331 
21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:22:38.331 00:22:38.331 --- 10.0.0.2 ping statistics --- 00:22:38.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.331 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:22:38.331 00:22:38.331 --- 10.0.0.1 ping statistics --- 00:22:38.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.331 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:38.331 net.core.busy_poll = 1 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:38.331 net.core.busy_read = 1 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:38.331 21:38:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:38.331 21:38:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:38.331 21:38:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.331 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.331 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.331 21:38:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2244051 00:22:38.331 21:38:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2244051 00:22:38.331 21:38:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:38.331 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2244051 ']' 00:22:38.331 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.331 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.331 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.331 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.331 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.331 [2024-07-15 21:38:28.075649] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:22:38.332 [2024-07-15 21:38:28.075704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.332 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.593 [2024-07-15 21:38:28.142017] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.593 [2024-07-15 21:38:28.212692] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.593 [2024-07-15 21:38:28.212727] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.593 [2024-07-15 21:38:28.212735] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.593 [2024-07-15 21:38:28.212741] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.593 [2024-07-15 21:38:28.212747] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
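The perf_adq.sh@22-38 commands just traced are the whole host-side ADQ setup: hardware traffic-class offload on the target interface, busy polling, an mqprio qdisc with two traffic classes, and a flower filter that pins NVMe/TCP traffic to TC 1. Collected into one hedged sketch using the same values as this run, with the namespaced cvl_0_0 port written as a generic IFACE variable:

    IFACE=cvl_0_0   # target-side port; in this setup the commands run inside the target netns

    # Hardware traffic-class offload plus busy polling, as ADQ requires.
    ethtool --offload "$IFACE" hw-tc-offload on
    ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1

    # Two traffic classes in channel mode: TC0 -> queues 0-1, TC1 -> queues 2-3.
    tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel

    # Steer the NVMe/TCP listener (10.0.0.2:4420) into TC1, offloaded to the NIC.
    tc qdisc add dev "$IFACE" ingress
    tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

After these commands the run above additionally invokes scripts/perf/nvmf/set_xps_rxqs on the same interface to align transmit-queue selection with the receive queues before the target starts.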
00:22:38.593 [2024-07-15 21:38:28.212888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.593 [2024-07-15 21:38:28.213001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.593 [2024-07-15 21:38:28.213173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.593 [2024-07-15 21:38:28.213174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.165 21:38:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.426 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.426 21:38:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:39.426 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.426 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.426 [2024-07-15 21:38:29.021445] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.426 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.426 21:38:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:39.426 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.426 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.426 Malloc1 00:22:39.426 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.426 21:38:29 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:39.426 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.426 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.426 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.426 21:38:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:39.427 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.427 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.427 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.427 21:38:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.427 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.427 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.427 [2024-07-15 21:38:29.080875] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.427 21:38:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.427 21:38:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2244401 00:22:39.427 21:38:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:39.427 21:38:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:39.427 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.469 21:38:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:41.469 21:38:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.469 21:38:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.469 21:38:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.469 21:38:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:41.469 "tick_rate": 2400000000, 00:22:41.469 "poll_groups": [ 00:22:41.469 { 00:22:41.469 "name": "nvmf_tgt_poll_group_000", 00:22:41.469 "admin_qpairs": 1, 00:22:41.469 "io_qpairs": 1, 00:22:41.469 "current_admin_qpairs": 1, 00:22:41.469 "current_io_qpairs": 1, 00:22:41.469 "pending_bdev_io": 0, 00:22:41.469 "completed_nvme_io": 26544, 00:22:41.469 "transports": [ 00:22:41.469 { 00:22:41.469 "trtype": "TCP" 00:22:41.469 } 00:22:41.469 ] 00:22:41.469 }, 00:22:41.469 { 00:22:41.469 "name": "nvmf_tgt_poll_group_001", 00:22:41.469 "admin_qpairs": 0, 00:22:41.469 "io_qpairs": 3, 00:22:41.469 "current_admin_qpairs": 0, 00:22:41.469 "current_io_qpairs": 3, 00:22:41.469 "pending_bdev_io": 0, 00:22:41.469 "completed_nvme_io": 41206, 00:22:41.469 "transports": [ 00:22:41.469 { 00:22:41.469 "trtype": "TCP" 00:22:41.469 } 00:22:41.469 ] 00:22:41.469 }, 00:22:41.469 { 00:22:41.469 "name": "nvmf_tgt_poll_group_002", 00:22:41.469 "admin_qpairs": 0, 00:22:41.469 "io_qpairs": 0, 00:22:41.469 "current_admin_qpairs": 0, 00:22:41.469 "current_io_qpairs": 0, 00:22:41.469 "pending_bdev_io": 0, 00:22:41.469 "completed_nvme_io": 0, 
00:22:41.469 "transports": [ 00:22:41.469 { 00:22:41.469 "trtype": "TCP" 00:22:41.469 } 00:22:41.469 ] 00:22:41.469 }, 00:22:41.469 { 00:22:41.469 "name": "nvmf_tgt_poll_group_003", 00:22:41.469 "admin_qpairs": 0, 00:22:41.469 "io_qpairs": 0, 00:22:41.469 "current_admin_qpairs": 0, 00:22:41.469 "current_io_qpairs": 0, 00:22:41.469 "pending_bdev_io": 0, 00:22:41.469 "completed_nvme_io": 0, 00:22:41.469 "transports": [ 00:22:41.469 { 00:22:41.469 "trtype": "TCP" 00:22:41.469 } 00:22:41.469 ] 00:22:41.469 } 00:22:41.469 ] 00:22:41.469 }' 00:22:41.469 21:38:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:41.469 21:38:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:41.469 21:38:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:41.469 21:38:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:41.469 21:38:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2244401 00:22:49.608 Initializing NVMe Controllers 00:22:49.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:49.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:49.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:49.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:49.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:49.608 Initialization complete. Launching workers. 00:22:49.608 ======================================================== 00:22:49.608 Latency(us) 00:22:49.608 Device Information : IOPS MiB/s Average min max 00:22:49.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 17062.60 66.65 3763.12 1209.87 45634.78 00:22:49.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8392.10 32.78 7629.39 1229.28 52277.27 00:22:49.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7993.10 31.22 8024.17 1315.68 55665.01 00:22:49.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5216.30 20.38 12269.66 1811.64 55868.66 00:22:49.608 ======================================================== 00:22:49.608 Total : 38664.09 151.03 6630.84 1209.87 55868.66 00:22:49.608 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:49.608 rmmod nvme_tcp 00:22:49.608 rmmod nvme_fabrics 00:22:49.608 rmmod nvme_keyring 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2244051 ']' 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2244051 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2244051 ']' 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2244051 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2244051 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2244051' 00:22:49.608 killing process with pid 2244051 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2244051 00:22:49.608 21:38:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2244051 00:22:49.869 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:49.869 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:49.869 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:49.869 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:49.869 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:49.869 21:38:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.869 21:38:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.869 21:38:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.166 21:38:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.166 21:38:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:53.166 00:22:53.166 real 0m53.140s 00:22:53.166 user 2m49.847s 00:22:53.166 sys 0m10.378s 00:22:53.166 21:38:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:53.166 21:38:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.166 ************************************ 00:22:53.166 END TEST nvmf_perf_adq 00:22:53.166 ************************************ 00:22:53.166 21:38:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:53.166 21:38:42 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:53.166 21:38:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:53.166 21:38:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.166 21:38:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:53.166 ************************************ 00:22:53.166 START TEST nvmf_shutdown 00:22:53.166 ************************************ 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:53.166 * Looking for test storage... 
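Before the teardown traced above, the script verified ADQ steering with the nvmf_get_stats RPC a few entries back: two of the four poll groups reported current_io_qpairs of 0 (count=2), so the script's "-lt 2" check did not trip, i.e. the busy-polled connections stayed concentrated on the remaining groups. A condensed, hedged equivalent of that jq | wc -l pipeline, assuming SPDK's scripts/rpc.py and jq are on PATH:

    # Count target poll groups that currently carry no I/O queue pairs.
    idle=$(scripts/rpc.py nvmf_get_stats \
            | jq '[.poll_groups[] | select(.current_io_qpairs == 0)] | length')
    echo "idle poll groups: $idle"    # this run reported 2 of 4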
00:22:53.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.166 21:38:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:53.167 ************************************ 00:22:53.167 START TEST nvmf_shutdown_tc1 00:22:53.167 ************************************ 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:22:53.167 21:38:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.167 21:38:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:01.313 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:01.313 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.313 21:38:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:01.313 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:01.313 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:01.313 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:01.314 21:38:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:23:01.314 00:23:01.314 --- 10.0.0.2 ping statistics --- 00:23:01.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.314 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:01.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:23:01.314 00:23:01.314 --- 10.0.0.1 ping statistics --- 00:23:01.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.314 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2250755 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2250755 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2250755 ']' 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:01.314 [2024-07-15 21:38:50.176343] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
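Here nvmfappstart launches the target with "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E" and waitforlisten then blocks until the RPC socket at /var/tmp/spdk.sock answers. A rough standalone equivalent of that start-and-wait step (paths relative to an SPDK checkout, polling interval chosen arbitrarily, spdk_get_version used as a simple liveness probe):

    # Start nvmf_tgt in the target namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is up on /var/tmp/spdk.sock"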
00:23:01.314 [2024-07-15 21:38:50.176398] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.314 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.314 [2024-07-15 21:38:50.257892] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:01.314 [2024-07-15 21:38:50.324296] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.314 [2024-07-15 21:38:50.324336] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.314 [2024-07-15 21:38:50.324343] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.314 [2024-07-15 21:38:50.324350] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.314 [2024-07-15 21:38:50.324355] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.314 [2024-07-15 21:38:50.324461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.314 [2024-07-15 21:38:50.324613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.314 [2024-07-15 21:38:50.324773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.314 [2024-07-15 21:38:50.324774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.314 [2024-07-15 21:38:50.978697] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:01.314 21:38:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.314 21:38:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.314 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.314 Malloc1 00:23:01.314 [2024-07-15 21:38:51.082077] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.314 Malloc2 00:23:01.576 Malloc3 00:23:01.576 Malloc4 00:23:01.576 Malloc5 00:23:01.576 Malloc6 00:23:01.576 Malloc7 00:23:01.576 Malloc8 00:23:01.576 Malloc9 00:23:01.837 Malloc10 00:23:01.837 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.837 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:01.837 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:01.837 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.837 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2250957 00:23:01.837 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2250957 
/var/tmp/bdevperf.sock 00:23:01.837 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2250957 ']' 00:23:01.837 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.837 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.837 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:01.837 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.837 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:01.837 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.837 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.838 { 00:23:01.838 "params": { 00:23:01.838 "name": "Nvme$subsystem", 00:23:01.838 "trtype": "$TEST_TRANSPORT", 00:23:01.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.838 "adrfam": "ipv4", 00:23:01.838 "trsvcid": "$NVMF_PORT", 00:23:01.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.838 "hdgst": ${hdgst:-false}, 00:23:01.838 "ddgst": ${ddgst:-false} 00:23:01.838 }, 00:23:01.838 "method": "bdev_nvme_attach_controller" 00:23:01.838 } 00:23:01.838 EOF 00:23:01.838 )") 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.838 { 00:23:01.838 "params": { 00:23:01.838 "name": "Nvme$subsystem", 00:23:01.838 "trtype": "$TEST_TRANSPORT", 00:23:01.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.838 "adrfam": "ipv4", 00:23:01.838 "trsvcid": "$NVMF_PORT", 00:23:01.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.838 "hdgst": ${hdgst:-false}, 00:23:01.838 "ddgst": ${ddgst:-false} 00:23:01.838 }, 00:23:01.838 "method": "bdev_nvme_attach_controller" 00:23:01.838 } 00:23:01.838 EOF 00:23:01.838 )") 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.838 { 00:23:01.838 "params": { 00:23:01.838 
"name": "Nvme$subsystem", 00:23:01.838 "trtype": "$TEST_TRANSPORT", 00:23:01.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.838 "adrfam": "ipv4", 00:23:01.838 "trsvcid": "$NVMF_PORT", 00:23:01.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.838 "hdgst": ${hdgst:-false}, 00:23:01.838 "ddgst": ${ddgst:-false} 00:23:01.838 }, 00:23:01.838 "method": "bdev_nvme_attach_controller" 00:23:01.838 } 00:23:01.838 EOF 00:23:01.838 )") 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.838 { 00:23:01.838 "params": { 00:23:01.838 "name": "Nvme$subsystem", 00:23:01.838 "trtype": "$TEST_TRANSPORT", 00:23:01.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.838 "adrfam": "ipv4", 00:23:01.838 "trsvcid": "$NVMF_PORT", 00:23:01.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.838 "hdgst": ${hdgst:-false}, 00:23:01.838 "ddgst": ${ddgst:-false} 00:23:01.838 }, 00:23:01.838 "method": "bdev_nvme_attach_controller" 00:23:01.838 } 00:23:01.838 EOF 00:23:01.838 )") 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.838 { 00:23:01.838 "params": { 00:23:01.838 "name": "Nvme$subsystem", 00:23:01.838 "trtype": "$TEST_TRANSPORT", 00:23:01.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.838 "adrfam": "ipv4", 00:23:01.838 "trsvcid": "$NVMF_PORT", 00:23:01.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.838 "hdgst": ${hdgst:-false}, 00:23:01.838 "ddgst": ${ddgst:-false} 00:23:01.838 }, 00:23:01.838 "method": "bdev_nvme_attach_controller" 00:23:01.838 } 00:23:01.838 EOF 00:23:01.838 )") 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.838 { 00:23:01.838 "params": { 00:23:01.838 "name": "Nvme$subsystem", 00:23:01.838 "trtype": "$TEST_TRANSPORT", 00:23:01.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.838 "adrfam": "ipv4", 00:23:01.838 "trsvcid": "$NVMF_PORT", 00:23:01.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.838 "hdgst": ${hdgst:-false}, 00:23:01.838 "ddgst": ${ddgst:-false} 00:23:01.838 }, 00:23:01.838 "method": "bdev_nvme_attach_controller" 00:23:01.838 } 00:23:01.838 EOF 00:23:01.838 )") 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.838 [2024-07-15 21:38:51.526524] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:23:01.838 [2024-07-15 21:38:51.526577] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.838 { 00:23:01.838 "params": { 00:23:01.838 "name": "Nvme$subsystem", 00:23:01.838 "trtype": "$TEST_TRANSPORT", 00:23:01.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.838 "adrfam": "ipv4", 00:23:01.838 "trsvcid": "$NVMF_PORT", 00:23:01.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.838 "hdgst": ${hdgst:-false}, 00:23:01.838 "ddgst": ${ddgst:-false} 00:23:01.838 }, 00:23:01.838 "method": "bdev_nvme_attach_controller" 00:23:01.838 } 00:23:01.838 EOF 00:23:01.838 )") 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.838 { 00:23:01.838 "params": { 00:23:01.838 "name": "Nvme$subsystem", 00:23:01.838 "trtype": "$TEST_TRANSPORT", 00:23:01.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.838 "adrfam": "ipv4", 00:23:01.838 "trsvcid": "$NVMF_PORT", 00:23:01.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.838 "hdgst": ${hdgst:-false}, 00:23:01.838 "ddgst": ${ddgst:-false} 00:23:01.838 }, 00:23:01.838 "method": "bdev_nvme_attach_controller" 00:23:01.838 } 00:23:01.838 EOF 00:23:01.838 )") 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.838 { 00:23:01.838 "params": { 00:23:01.838 "name": "Nvme$subsystem", 00:23:01.838 "trtype": "$TEST_TRANSPORT", 00:23:01.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.838 "adrfam": "ipv4", 00:23:01.838 "trsvcid": "$NVMF_PORT", 00:23:01.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.838 "hdgst": ${hdgst:-false}, 00:23:01.838 "ddgst": ${ddgst:-false} 00:23:01.838 }, 00:23:01.838 "method": "bdev_nvme_attach_controller" 00:23:01.838 } 00:23:01.838 EOF 00:23:01.838 )") 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.838 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.838 { 00:23:01.838 "params": { 00:23:01.838 "name": "Nvme$subsystem", 00:23:01.838 "trtype": "$TEST_TRANSPORT", 00:23:01.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.838 "adrfam": "ipv4", 00:23:01.838 "trsvcid": "$NVMF_PORT", 00:23:01.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.838 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:01.838 "hdgst": ${hdgst:-false}, 00:23:01.838 "ddgst": ${ddgst:-false} 00:23:01.838 }, 00:23:01.838 "method": "bdev_nvme_attach_controller" 00:23:01.838 } 00:23:01.838 EOF 00:23:01.838 )") 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:01.838 21:38:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:01.838 "params": { 00:23:01.838 "name": "Nvme1", 00:23:01.838 "trtype": "tcp", 00:23:01.838 "traddr": "10.0.0.2", 00:23:01.838 "adrfam": "ipv4", 00:23:01.838 "trsvcid": "4420", 00:23:01.838 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.838 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.838 "hdgst": false, 00:23:01.838 "ddgst": false 00:23:01.838 }, 00:23:01.838 "method": "bdev_nvme_attach_controller" 00:23:01.838 },{ 00:23:01.838 "params": { 00:23:01.838 "name": "Nvme2", 00:23:01.838 "trtype": "tcp", 00:23:01.838 "traddr": "10.0.0.2", 00:23:01.838 "adrfam": "ipv4", 00:23:01.838 "trsvcid": "4420", 00:23:01.838 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.839 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:01.839 "hdgst": false, 00:23:01.839 "ddgst": false 00:23:01.839 }, 00:23:01.839 "method": "bdev_nvme_attach_controller" 00:23:01.839 },{ 00:23:01.839 "params": { 00:23:01.839 "name": "Nvme3", 00:23:01.839 "trtype": "tcp", 00:23:01.839 "traddr": "10.0.0.2", 00:23:01.839 "adrfam": "ipv4", 00:23:01.839 "trsvcid": "4420", 00:23:01.839 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:01.839 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:01.839 "hdgst": false, 00:23:01.839 "ddgst": false 00:23:01.839 }, 00:23:01.839 "method": "bdev_nvme_attach_controller" 00:23:01.839 },{ 00:23:01.839 "params": { 00:23:01.839 "name": "Nvme4", 00:23:01.839 "trtype": "tcp", 00:23:01.839 "traddr": "10.0.0.2", 00:23:01.839 "adrfam": "ipv4", 00:23:01.839 "trsvcid": "4420", 00:23:01.839 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:01.839 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:01.839 "hdgst": false, 00:23:01.839 "ddgst": false 00:23:01.839 }, 00:23:01.839 "method": "bdev_nvme_attach_controller" 00:23:01.839 },{ 00:23:01.839 "params": { 00:23:01.839 "name": "Nvme5", 00:23:01.839 "trtype": "tcp", 00:23:01.839 "traddr": "10.0.0.2", 00:23:01.839 "adrfam": "ipv4", 00:23:01.839 "trsvcid": "4420", 00:23:01.839 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:01.839 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:01.839 "hdgst": false, 00:23:01.839 "ddgst": false 00:23:01.839 }, 00:23:01.839 "method": "bdev_nvme_attach_controller" 00:23:01.839 },{ 00:23:01.839 "params": { 00:23:01.839 "name": "Nvme6", 00:23:01.839 "trtype": "tcp", 00:23:01.839 "traddr": "10.0.0.2", 00:23:01.839 "adrfam": "ipv4", 00:23:01.839 "trsvcid": "4420", 00:23:01.839 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:01.839 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:01.839 "hdgst": false, 00:23:01.839 "ddgst": false 00:23:01.839 }, 00:23:01.839 "method": "bdev_nvme_attach_controller" 00:23:01.839 },{ 00:23:01.839 "params": { 00:23:01.839 "name": "Nvme7", 00:23:01.839 "trtype": "tcp", 00:23:01.839 "traddr": "10.0.0.2", 00:23:01.839 "adrfam": "ipv4", 00:23:01.839 "trsvcid": "4420", 00:23:01.839 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:01.839 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:01.839 "hdgst": false, 00:23:01.839 
"ddgst": false 00:23:01.839 }, 00:23:01.839 "method": "bdev_nvme_attach_controller" 00:23:01.839 },{ 00:23:01.839 "params": { 00:23:01.839 "name": "Nvme8", 00:23:01.839 "trtype": "tcp", 00:23:01.839 "traddr": "10.0.0.2", 00:23:01.839 "adrfam": "ipv4", 00:23:01.839 "trsvcid": "4420", 00:23:01.839 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:01.839 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:01.839 "hdgst": false, 00:23:01.839 "ddgst": false 00:23:01.839 }, 00:23:01.839 "method": "bdev_nvme_attach_controller" 00:23:01.839 },{ 00:23:01.839 "params": { 00:23:01.839 "name": "Nvme9", 00:23:01.839 "trtype": "tcp", 00:23:01.839 "traddr": "10.0.0.2", 00:23:01.839 "adrfam": "ipv4", 00:23:01.839 "trsvcid": "4420", 00:23:01.839 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:01.839 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:01.839 "hdgst": false, 00:23:01.839 "ddgst": false 00:23:01.839 }, 00:23:01.839 "method": "bdev_nvme_attach_controller" 00:23:01.839 },{ 00:23:01.839 "params": { 00:23:01.839 "name": "Nvme10", 00:23:01.839 "trtype": "tcp", 00:23:01.839 "traddr": "10.0.0.2", 00:23:01.839 "adrfam": "ipv4", 00:23:01.839 "trsvcid": "4420", 00:23:01.839 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:01.839 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:01.839 "hdgst": false, 00:23:01.839 "ddgst": false 00:23:01.839 }, 00:23:01.839 "method": "bdev_nvme_attach_controller" 00:23:01.839 }' 00:23:01.839 [2024-07-15 21:38:51.587211] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.100 [2024-07-15 21:38:51.652134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.485 21:38:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.485 21:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:03.485 21:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:03.485 21:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.485 21:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:03.485 21:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.485 21:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2250957 00:23:03.485 21:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:03.485 21:38:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:04.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2250957 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:04.427 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2250755 00:23:04.427 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:04.427 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:04.427 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:04.427 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local 
subsystem config 00:23:04.427 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.427 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.427 { 00:23:04.427 "params": { 00:23:04.427 "name": "Nvme$subsystem", 00:23:04.427 "trtype": "$TEST_TRANSPORT", 00:23:04.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.427 "adrfam": "ipv4", 00:23:04.427 "trsvcid": "$NVMF_PORT", 00:23:04.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.427 "hdgst": ${hdgst:-false}, 00:23:04.427 "ddgst": ${ddgst:-false} 00:23:04.427 }, 00:23:04.427 "method": "bdev_nvme_attach_controller" 00:23:04.427 } 00:23:04.427 EOF 00:23:04.427 )") 00:23:04.427 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.427 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.427 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.427 { 00:23:04.427 "params": { 00:23:04.427 "name": "Nvme$subsystem", 00:23:04.427 "trtype": "$TEST_TRANSPORT", 00:23:04.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.427 "adrfam": "ipv4", 00:23:04.427 "trsvcid": "$NVMF_PORT", 00:23:04.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.427 "hdgst": ${hdgst:-false}, 00:23:04.427 "ddgst": ${ddgst:-false} 00:23:04.427 }, 00:23:04.427 "method": "bdev_nvme_attach_controller" 00:23:04.427 } 00:23:04.427 EOF 00:23:04.427 )") 00:23:04.427 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.427 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.427 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.427 { 00:23:04.427 "params": { 00:23:04.427 "name": "Nvme$subsystem", 00:23:04.427 "trtype": "$TEST_TRANSPORT", 00:23:04.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.427 "adrfam": "ipv4", 00:23:04.427 "trsvcid": "$NVMF_PORT", 00:23:04.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.427 "hdgst": ${hdgst:-false}, 00:23:04.427 "ddgst": ${ddgst:-false} 00:23:04.427 }, 00:23:04.427 "method": "bdev_nvme_attach_controller" 00:23:04.427 } 00:23:04.427 EOF 00:23:04.427 )") 00:23:04.427 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.427 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.428 { 00:23:04.428 "params": { 00:23:04.428 "name": "Nvme$subsystem", 00:23:04.428 "trtype": "$TEST_TRANSPORT", 00:23:04.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.428 "adrfam": "ipv4", 00:23:04.428 "trsvcid": "$NVMF_PORT", 00:23:04.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.428 "hdgst": ${hdgst:-false}, 00:23:04.428 "ddgst": ${ddgst:-false} 00:23:04.428 }, 00:23:04.428 "method": "bdev_nvme_attach_controller" 00:23:04.428 } 00:23:04.428 EOF 00:23:04.428 )") 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.428 
21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.428 { 00:23:04.428 "params": { 00:23:04.428 "name": "Nvme$subsystem", 00:23:04.428 "trtype": "$TEST_TRANSPORT", 00:23:04.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.428 "adrfam": "ipv4", 00:23:04.428 "trsvcid": "$NVMF_PORT", 00:23:04.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.428 "hdgst": ${hdgst:-false}, 00:23:04.428 "ddgst": ${ddgst:-false} 00:23:04.428 }, 00:23:04.428 "method": "bdev_nvme_attach_controller" 00:23:04.428 } 00:23:04.428 EOF 00:23:04.428 )") 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.428 { 00:23:04.428 "params": { 00:23:04.428 "name": "Nvme$subsystem", 00:23:04.428 "trtype": "$TEST_TRANSPORT", 00:23:04.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.428 "adrfam": "ipv4", 00:23:04.428 "trsvcid": "$NVMF_PORT", 00:23:04.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.428 "hdgst": ${hdgst:-false}, 00:23:04.428 "ddgst": ${ddgst:-false} 00:23:04.428 }, 00:23:04.428 "method": "bdev_nvme_attach_controller" 00:23:04.428 } 00:23:04.428 EOF 00:23:04.428 )") 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.428 [2024-07-15 21:38:54.064230] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
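The heredoc blocks being accumulated above each become one bdev_nvme_attach_controller entry, and bdevperf reads the assembled configuration on /dev/fd/62 (gen_nvmf_target_json 1 through 10, -q 64 -o 65536 -w verify -t 1, as traced). The printf '%s\n' output in the trace shows the expanded params objects verbatim, but the outer wrapper that gen_nvmf_target_json puts around them is not echoed, so its shape in the reconstruction below is an assumption; it is trimmed to a single controller and written to a regular file instead of a process substitution:

# Reconstructed bdevperf JSON config (wrapper shape assumed; params copied from the trace).
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# Same flags as the traced run: 64 outstanding I/Os, 64 KiB I/O size, verify workload, 1 second.
build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1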
00:23:04.428 [2024-07-15 21:38:54.064287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251609 ] 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.428 { 00:23:04.428 "params": { 00:23:04.428 "name": "Nvme$subsystem", 00:23:04.428 "trtype": "$TEST_TRANSPORT", 00:23:04.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.428 "adrfam": "ipv4", 00:23:04.428 "trsvcid": "$NVMF_PORT", 00:23:04.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.428 "hdgst": ${hdgst:-false}, 00:23:04.428 "ddgst": ${ddgst:-false} 00:23:04.428 }, 00:23:04.428 "method": "bdev_nvme_attach_controller" 00:23:04.428 } 00:23:04.428 EOF 00:23:04.428 )") 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.428 { 00:23:04.428 "params": { 00:23:04.428 "name": "Nvme$subsystem", 00:23:04.428 "trtype": "$TEST_TRANSPORT", 00:23:04.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.428 "adrfam": "ipv4", 00:23:04.428 "trsvcid": "$NVMF_PORT", 00:23:04.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.428 "hdgst": ${hdgst:-false}, 00:23:04.428 "ddgst": ${ddgst:-false} 00:23:04.428 }, 00:23:04.428 "method": "bdev_nvme_attach_controller" 00:23:04.428 } 00:23:04.428 EOF 00:23:04.428 )") 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.428 { 00:23:04.428 "params": { 00:23:04.428 "name": "Nvme$subsystem", 00:23:04.428 "trtype": "$TEST_TRANSPORT", 00:23:04.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.428 "adrfam": "ipv4", 00:23:04.428 "trsvcid": "$NVMF_PORT", 00:23:04.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.428 "hdgst": ${hdgst:-false}, 00:23:04.428 "ddgst": ${ddgst:-false} 00:23:04.428 }, 00:23:04.428 "method": "bdev_nvme_attach_controller" 00:23:04.428 } 00:23:04.428 EOF 00:23:04.428 )") 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:04.428 { 00:23:04.428 "params": { 00:23:04.428 "name": "Nvme$subsystem", 00:23:04.428 "trtype": "$TEST_TRANSPORT", 00:23:04.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:04.428 "adrfam": "ipv4", 00:23:04.428 "trsvcid": "$NVMF_PORT", 00:23:04.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:04.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:04.428 
"hdgst": ${hdgst:-false}, 00:23:04.428 "ddgst": ${ddgst:-false} 00:23:04.428 }, 00:23:04.428 "method": "bdev_nvme_attach_controller" 00:23:04.428 } 00:23:04.428 EOF 00:23:04.428 )") 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:04.428 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:04.428 21:38:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:04.428 "params": { 00:23:04.428 "name": "Nvme1", 00:23:04.428 "trtype": "tcp", 00:23:04.428 "traddr": "10.0.0.2", 00:23:04.428 "adrfam": "ipv4", 00:23:04.428 "trsvcid": "4420", 00:23:04.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:04.428 "hdgst": false, 00:23:04.428 "ddgst": false 00:23:04.428 }, 00:23:04.428 "method": "bdev_nvme_attach_controller" 00:23:04.428 },{ 00:23:04.428 "params": { 00:23:04.428 "name": "Nvme2", 00:23:04.428 "trtype": "tcp", 00:23:04.428 "traddr": "10.0.0.2", 00:23:04.428 "adrfam": "ipv4", 00:23:04.428 "trsvcid": "4420", 00:23:04.428 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:04.428 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:04.428 "hdgst": false, 00:23:04.428 "ddgst": false 00:23:04.428 }, 00:23:04.428 "method": "bdev_nvme_attach_controller" 00:23:04.428 },{ 00:23:04.428 "params": { 00:23:04.428 "name": "Nvme3", 00:23:04.428 "trtype": "tcp", 00:23:04.428 "traddr": "10.0.0.2", 00:23:04.428 "adrfam": "ipv4", 00:23:04.428 "trsvcid": "4420", 00:23:04.428 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:04.428 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:04.428 "hdgst": false, 00:23:04.428 "ddgst": false 00:23:04.428 }, 00:23:04.428 "method": "bdev_nvme_attach_controller" 00:23:04.428 },{ 00:23:04.428 "params": { 00:23:04.428 "name": "Nvme4", 00:23:04.428 "trtype": "tcp", 00:23:04.428 "traddr": "10.0.0.2", 00:23:04.428 "adrfam": "ipv4", 00:23:04.428 "trsvcid": "4420", 00:23:04.428 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:04.428 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:04.428 "hdgst": false, 00:23:04.428 "ddgst": false 00:23:04.428 }, 00:23:04.428 "method": "bdev_nvme_attach_controller" 00:23:04.428 },{ 00:23:04.428 "params": { 00:23:04.429 "name": "Nvme5", 00:23:04.429 "trtype": "tcp", 00:23:04.429 "traddr": "10.0.0.2", 00:23:04.429 "adrfam": "ipv4", 00:23:04.429 "trsvcid": "4420", 00:23:04.429 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:04.429 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:04.429 "hdgst": false, 00:23:04.429 "ddgst": false 00:23:04.429 }, 00:23:04.429 "method": "bdev_nvme_attach_controller" 00:23:04.429 },{ 00:23:04.429 "params": { 00:23:04.429 "name": "Nvme6", 00:23:04.429 "trtype": "tcp", 00:23:04.429 "traddr": "10.0.0.2", 00:23:04.429 "adrfam": "ipv4", 00:23:04.429 "trsvcid": "4420", 00:23:04.429 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:04.429 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:04.429 "hdgst": false, 00:23:04.429 "ddgst": false 00:23:04.429 }, 00:23:04.429 "method": "bdev_nvme_attach_controller" 00:23:04.429 },{ 00:23:04.429 "params": { 00:23:04.429 "name": "Nvme7", 00:23:04.429 "trtype": "tcp", 00:23:04.429 "traddr": "10.0.0.2", 00:23:04.429 "adrfam": "ipv4", 00:23:04.429 "trsvcid": "4420", 00:23:04.429 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:04.429 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:04.429 "hdgst": false, 
00:23:04.429 "ddgst": false 00:23:04.429 }, 00:23:04.429 "method": "bdev_nvme_attach_controller" 00:23:04.429 },{ 00:23:04.429 "params": { 00:23:04.429 "name": "Nvme8", 00:23:04.429 "trtype": "tcp", 00:23:04.429 "traddr": "10.0.0.2", 00:23:04.429 "adrfam": "ipv4", 00:23:04.429 "trsvcid": "4420", 00:23:04.429 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:04.429 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:04.429 "hdgst": false, 00:23:04.429 "ddgst": false 00:23:04.429 }, 00:23:04.429 "method": "bdev_nvme_attach_controller" 00:23:04.429 },{ 00:23:04.429 "params": { 00:23:04.429 "name": "Nvme9", 00:23:04.429 "trtype": "tcp", 00:23:04.429 "traddr": "10.0.0.2", 00:23:04.429 "adrfam": "ipv4", 00:23:04.429 "trsvcid": "4420", 00:23:04.429 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:04.429 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:04.429 "hdgst": false, 00:23:04.429 "ddgst": false 00:23:04.429 }, 00:23:04.429 "method": "bdev_nvme_attach_controller" 00:23:04.429 },{ 00:23:04.429 "params": { 00:23:04.429 "name": "Nvme10", 00:23:04.429 "trtype": "tcp", 00:23:04.429 "traddr": "10.0.0.2", 00:23:04.429 "adrfam": "ipv4", 00:23:04.429 "trsvcid": "4420", 00:23:04.429 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:04.429 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:04.429 "hdgst": false, 00:23:04.429 "ddgst": false 00:23:04.429 }, 00:23:04.429 "method": "bdev_nvme_attach_controller" 00:23:04.429 }' 00:23:04.429 [2024-07-15 21:38:54.125172] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.429 [2024-07-15 21:38:54.189273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.814 Running I/O for 1 seconds... 00:23:07.201 00:23:07.201 Latency(us) 00:23:07.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.201 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.201 Verification LBA range: start 0x0 length 0x400 00:23:07.201 Nvme1n1 : 1.10 233.32 14.58 0.00 0.00 271336.53 22609.92 242920.11 00:23:07.201 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.201 Verification LBA range: start 0x0 length 0x400 00:23:07.201 Nvme2n1 : 1.09 234.48 14.66 0.00 0.00 265287.68 22282.24 265639.25 00:23:07.201 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.201 Verification LBA range: start 0x0 length 0x400 00:23:07.201 Nvme3n1 : 1.11 230.52 14.41 0.00 0.00 265120.00 22282.24 242920.11 00:23:07.201 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.201 Verification LBA range: start 0x0 length 0x400 00:23:07.201 Nvme4n1 : 1.10 231.74 14.48 0.00 0.00 259040.00 22937.60 244667.73 00:23:07.201 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.201 Verification LBA range: start 0x0 length 0x400 00:23:07.201 Nvme5n1 : 1.11 231.20 14.45 0.00 0.00 255012.69 41287.68 228939.09 00:23:07.201 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.201 Verification LBA range: start 0x0 length 0x400 00:23:07.201 Nvme6n1 : 1.16 220.16 13.76 0.00 0.00 260395.73 21517.65 249910.61 00:23:07.201 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.201 Verification LBA range: start 0x0 length 0x400 00:23:07.201 Nvme7n1 : 1.12 286.03 17.88 0.00 0.00 198673.41 11960.32 251658.24 00:23:07.201 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.201 Verification LBA range: start 0x0 length 0x400 00:23:07.201 Nvme8n1 : 1.18 
271.75 16.98 0.00 0.00 206711.47 20097.71 246415.36 00:23:07.201 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.201 Verification LBA range: start 0x0 length 0x400 00:23:07.201 Nvme9n1 : 1.20 266.80 16.68 0.00 0.00 207279.27 15510.19 244667.73 00:23:07.201 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:07.201 Verification LBA range: start 0x0 length 0x400 00:23:07.201 Nvme10n1 : 1.21 264.74 16.55 0.00 0.00 205522.52 13926.40 265639.25 00:23:07.201 =================================================================================================================== 00:23:07.201 Total : 2470.76 154.42 0.00 0.00 236266.00 11960.32 265639.25 00:23:07.201 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:07.201 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:07.201 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:07.201 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:07.201 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:07.201 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.201 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:07.201 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.201 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:07.201 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.201 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.201 rmmod nvme_tcp 00:23:07.202 rmmod nvme_fabrics 00:23:07.202 rmmod nvme_keyring 00:23:07.202 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.202 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:07.202 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:07.202 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2250755 ']' 00:23:07.202 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2250755 00:23:07.202 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2250755 ']' 00:23:07.202 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2250755 00:23:07.202 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:07.202 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.202 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2250755 00:23:07.202 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:07.202 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:07.202 21:38:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2250755' 00:23:07.202 killing process with pid 2250755 00:23:07.202 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2250755 00:23:07.202 21:38:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2250755 00:23:07.463 21:38:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:07.463 21:38:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:07.463 21:38:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:07.463 21:38:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.463 21:38:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:07.463 21:38:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.463 21:38:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.463 21:38:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:10.008 00:23:10.008 real 0m16.377s 00:23:10.008 user 0m33.467s 00:23:10.008 sys 0m6.410s 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.008 ************************************ 00:23:10.008 END TEST nvmf_shutdown_tc1 00:23:10.008 ************************************ 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:10.008 ************************************ 00:23:10.008 START TEST nvmf_shutdown_tc2 00:23:10.008 ************************************ 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.008 21:38:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.008 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:10.009 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:10.009 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:10.009 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:10.009 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:10.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:23:10.009 00:23:10.009 --- 10.0.0.2 ping statistics --- 00:23:10.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.009 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:10.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:23:10.009 00:23:10.009 --- 10.0.0.1 ping statistics --- 00:23:10.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.009 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=2252724 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2252724 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2252724 ']' 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.009 21:38:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.009 [2024-07-15 21:38:59.725386] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:23:10.009 [2024-07-15 21:38:59.725438] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.009 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.009 [2024-07-15 21:38:59.805108] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:10.271 [2024-07-15 21:38:59.860615] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.271 [2024-07-15 21:38:59.860648] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.271 [2024-07-15 21:38:59.860653] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.271 [2024-07-15 21:38:59.860658] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.271 [2024-07-15 21:38:59.860663] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
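For reference, the nvmf_tcp_init trace above moves one e810 port (cvl_0_0) into a private network namespace for the target and leaves its sibling (cvl_0_1) in the root namespace as the initiator side. Condensed into plain iproute2 commands, the plumbing amounts to the sketch below; interface and namespace names are taken from the log, everything else is illustrative rather than the harness's own helper.

# Sketch of the data-path setup traced above (not the literal nvmf_tcp_init function).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                             # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator address in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0     # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # verify both directions, as the log does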
00:23:10.271 [2024-07-15 21:38:59.860771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.271 [2024-07-15 21:38:59.860929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:10.271 [2024-07-15 21:38:59.861082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.271 [2024-07-15 21:38:59.861084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.843 [2024-07-15 21:39:00.540547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.843 21:39:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.843 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.843 Malloc1 00:23:10.843 [2024-07-15 21:39:00.635293] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.103 Malloc2 00:23:11.103 Malloc3 00:23:11.103 Malloc4 00:23:11.103 Malloc5 00:23:11.103 Malloc6 00:23:11.103 Malloc7 00:23:11.103 Malloc8 00:23:11.364 Malloc9 00:23:11.364 Malloc10 00:23:11.364 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.364 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:11.364 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:11.364 21:39:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.364 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2253164 00:23:11.364 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2253164 /var/tmp/bdevperf.sock 00:23:11.364 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2253164 ']' 00:23:11.364 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
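The shutdown.sh@28 cat loop above appends one block per subsystem to rpcs.txt, which rpc_cmd then replays against the target, producing the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener seen in the notices. A hedged, hand-driven equivalent using standard SPDK RPCs (not the literal contents of rpcs.txt; malloc size and block size are assumptions) would be:

# Sketch: one Malloc-backed subsystem per index, mirroring what the trace suggests.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
for i in $(seq 1 10); do
    "$RPC" bdev_malloc_create -b "Malloc$i" 128 512                      # 128 MiB, 512 B blocks (assumed)
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done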
00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.365 { 00:23:11.365 "params": { 00:23:11.365 "name": "Nvme$subsystem", 00:23:11.365 "trtype": "$TEST_TRANSPORT", 00:23:11.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.365 "adrfam": "ipv4", 00:23:11.365 "trsvcid": "$NVMF_PORT", 00:23:11.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.365 "hdgst": ${hdgst:-false}, 00:23:11.365 "ddgst": ${ddgst:-false} 00:23:11.365 }, 00:23:11.365 "method": "bdev_nvme_attach_controller" 00:23:11.365 } 00:23:11.365 EOF 00:23:11.365 )") 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.365 { 00:23:11.365 "params": { 00:23:11.365 "name": "Nvme$subsystem", 00:23:11.365 "trtype": "$TEST_TRANSPORT", 00:23:11.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.365 "adrfam": "ipv4", 00:23:11.365 "trsvcid": "$NVMF_PORT", 00:23:11.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.365 "hdgst": ${hdgst:-false}, 00:23:11.365 "ddgst": ${ddgst:-false} 00:23:11.365 }, 00:23:11.365 "method": "bdev_nvme_attach_controller" 00:23:11.365 } 00:23:11.365 EOF 00:23:11.365 )") 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.365 { 00:23:11.365 "params": { 00:23:11.365 "name": "Nvme$subsystem", 00:23:11.365 "trtype": "$TEST_TRANSPORT", 00:23:11.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.365 "adrfam": "ipv4", 00:23:11.365 "trsvcid": "$NVMF_PORT", 00:23:11.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.365 "hdgst": ${hdgst:-false}, 00:23:11.365 "ddgst": ${ddgst:-false} 00:23:11.365 }, 00:23:11.365 "method": "bdev_nvme_attach_controller" 00:23:11.365 } 00:23:11.365 EOF 00:23:11.365 )") 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.365 { 00:23:11.365 "params": { 00:23:11.365 "name": "Nvme$subsystem", 00:23:11.365 "trtype": "$TEST_TRANSPORT", 00:23:11.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.365 "adrfam": "ipv4", 00:23:11.365 "trsvcid": "$NVMF_PORT", 00:23:11.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.365 "hdgst": ${hdgst:-false}, 00:23:11.365 "ddgst": ${ddgst:-false} 00:23:11.365 }, 00:23:11.365 "method": "bdev_nvme_attach_controller" 00:23:11.365 } 00:23:11.365 EOF 00:23:11.365 )") 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.365 { 00:23:11.365 "params": { 00:23:11.365 "name": "Nvme$subsystem", 00:23:11.365 "trtype": "$TEST_TRANSPORT", 00:23:11.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.365 "adrfam": "ipv4", 00:23:11.365 "trsvcid": "$NVMF_PORT", 00:23:11.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.365 "hdgst": ${hdgst:-false}, 00:23:11.365 "ddgst": ${ddgst:-false} 00:23:11.365 }, 00:23:11.365 "method": "bdev_nvme_attach_controller" 00:23:11.365 } 00:23:11.365 EOF 00:23:11.365 )") 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.365 { 00:23:11.365 "params": { 00:23:11.365 "name": "Nvme$subsystem", 00:23:11.365 "trtype": "$TEST_TRANSPORT", 00:23:11.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.365 "adrfam": "ipv4", 00:23:11.365 "trsvcid": "$NVMF_PORT", 00:23:11.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.365 "hdgst": ${hdgst:-false}, 00:23:11.365 "ddgst": ${ddgst:-false} 00:23:11.365 }, 00:23:11.365 "method": "bdev_nvme_attach_controller" 00:23:11.365 } 00:23:11.365 EOF 00:23:11.365 )") 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.365 [2024-07-15 21:39:01.083271] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:23:11.365 [2024-07-15 21:39:01.083328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2253164 ] 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.365 { 00:23:11.365 "params": { 00:23:11.365 "name": "Nvme$subsystem", 00:23:11.365 "trtype": "$TEST_TRANSPORT", 00:23:11.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.365 "adrfam": "ipv4", 00:23:11.365 "trsvcid": "$NVMF_PORT", 00:23:11.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.365 "hdgst": ${hdgst:-false}, 00:23:11.365 "ddgst": ${ddgst:-false} 00:23:11.365 }, 00:23:11.365 "method": "bdev_nvme_attach_controller" 00:23:11.365 } 00:23:11.365 EOF 00:23:11.365 )") 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.365 { 00:23:11.365 "params": { 00:23:11.365 "name": "Nvme$subsystem", 00:23:11.365 "trtype": "$TEST_TRANSPORT", 00:23:11.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.365 "adrfam": "ipv4", 00:23:11.365 "trsvcid": "$NVMF_PORT", 00:23:11.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.365 "hdgst": ${hdgst:-false}, 00:23:11.365 "ddgst": ${ddgst:-false} 00:23:11.365 }, 00:23:11.365 "method": "bdev_nvme_attach_controller" 00:23:11.365 } 00:23:11.365 EOF 00:23:11.365 )") 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.365 { 00:23:11.365 "params": { 00:23:11.365 "name": "Nvme$subsystem", 00:23:11.365 "trtype": "$TEST_TRANSPORT", 00:23:11.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.365 "adrfam": "ipv4", 00:23:11.365 "trsvcid": "$NVMF_PORT", 00:23:11.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.365 "hdgst": ${hdgst:-false}, 00:23:11.365 "ddgst": ${ddgst:-false} 00:23:11.365 }, 00:23:11.365 "method": "bdev_nvme_attach_controller" 00:23:11.365 } 00:23:11.365 EOF 00:23:11.365 )") 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.365 { 00:23:11.365 "params": { 00:23:11.365 "name": "Nvme$subsystem", 00:23:11.365 "trtype": "$TEST_TRANSPORT", 00:23:11.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.365 "adrfam": "ipv4", 00:23:11.365 "trsvcid": "$NVMF_PORT", 00:23:11.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.365 
"hdgst": ${hdgst:-false}, 00:23:11.365 "ddgst": ${ddgst:-false} 00:23:11.365 }, 00:23:11.365 "method": "bdev_nvme_attach_controller" 00:23:11.365 } 00:23:11.365 EOF 00:23:11.365 )") 00:23:11.365 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:11.365 21:39:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:11.365 "params": { 00:23:11.366 "name": "Nvme1", 00:23:11.366 "trtype": "tcp", 00:23:11.366 "traddr": "10.0.0.2", 00:23:11.366 "adrfam": "ipv4", 00:23:11.366 "trsvcid": "4420", 00:23:11.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.366 "hdgst": false, 00:23:11.366 "ddgst": false 00:23:11.366 }, 00:23:11.366 "method": "bdev_nvme_attach_controller" 00:23:11.366 },{ 00:23:11.366 "params": { 00:23:11.366 "name": "Nvme2", 00:23:11.366 "trtype": "tcp", 00:23:11.366 "traddr": "10.0.0.2", 00:23:11.366 "adrfam": "ipv4", 00:23:11.366 "trsvcid": "4420", 00:23:11.366 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:11.366 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:11.366 "hdgst": false, 00:23:11.366 "ddgst": false 00:23:11.366 }, 00:23:11.366 "method": "bdev_nvme_attach_controller" 00:23:11.366 },{ 00:23:11.366 "params": { 00:23:11.366 "name": "Nvme3", 00:23:11.366 "trtype": "tcp", 00:23:11.366 "traddr": "10.0.0.2", 00:23:11.366 "adrfam": "ipv4", 00:23:11.366 "trsvcid": "4420", 00:23:11.366 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:11.366 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:11.366 "hdgst": false, 00:23:11.366 "ddgst": false 00:23:11.366 }, 00:23:11.366 "method": "bdev_nvme_attach_controller" 00:23:11.366 },{ 00:23:11.366 "params": { 00:23:11.366 "name": "Nvme4", 00:23:11.366 "trtype": "tcp", 00:23:11.366 "traddr": "10.0.0.2", 00:23:11.366 "adrfam": "ipv4", 00:23:11.366 "trsvcid": "4420", 00:23:11.366 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:11.366 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:11.366 "hdgst": false, 00:23:11.366 "ddgst": false 00:23:11.366 }, 00:23:11.366 "method": "bdev_nvme_attach_controller" 00:23:11.366 },{ 00:23:11.366 "params": { 00:23:11.366 "name": "Nvme5", 00:23:11.366 "trtype": "tcp", 00:23:11.366 "traddr": "10.0.0.2", 00:23:11.366 "adrfam": "ipv4", 00:23:11.366 "trsvcid": "4420", 00:23:11.366 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:11.366 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:11.366 "hdgst": false, 00:23:11.366 "ddgst": false 00:23:11.366 }, 00:23:11.366 "method": "bdev_nvme_attach_controller" 00:23:11.366 },{ 00:23:11.366 "params": { 00:23:11.366 "name": "Nvme6", 00:23:11.366 "trtype": "tcp", 00:23:11.366 "traddr": "10.0.0.2", 00:23:11.366 "adrfam": "ipv4", 00:23:11.366 "trsvcid": "4420", 00:23:11.366 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:11.366 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:11.366 "hdgst": false, 00:23:11.366 "ddgst": false 00:23:11.366 }, 00:23:11.366 "method": "bdev_nvme_attach_controller" 00:23:11.366 },{ 00:23:11.366 "params": { 00:23:11.366 "name": "Nvme7", 00:23:11.366 "trtype": "tcp", 00:23:11.366 "traddr": "10.0.0.2", 00:23:11.366 "adrfam": "ipv4", 00:23:11.366 "trsvcid": "4420", 00:23:11.366 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:11.366 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:11.366 "hdgst": false, 
00:23:11.366 "ddgst": false 00:23:11.366 }, 00:23:11.366 "method": "bdev_nvme_attach_controller" 00:23:11.366 },{ 00:23:11.366 "params": { 00:23:11.366 "name": "Nvme8", 00:23:11.366 "trtype": "tcp", 00:23:11.366 "traddr": "10.0.0.2", 00:23:11.366 "adrfam": "ipv4", 00:23:11.366 "trsvcid": "4420", 00:23:11.366 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:11.366 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:11.366 "hdgst": false, 00:23:11.366 "ddgst": false 00:23:11.366 }, 00:23:11.366 "method": "bdev_nvme_attach_controller" 00:23:11.366 },{ 00:23:11.366 "params": { 00:23:11.366 "name": "Nvme9", 00:23:11.366 "trtype": "tcp", 00:23:11.366 "traddr": "10.0.0.2", 00:23:11.366 "adrfam": "ipv4", 00:23:11.366 "trsvcid": "4420", 00:23:11.366 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:11.366 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:11.366 "hdgst": false, 00:23:11.366 "ddgst": false 00:23:11.366 }, 00:23:11.366 "method": "bdev_nvme_attach_controller" 00:23:11.366 },{ 00:23:11.366 "params": { 00:23:11.366 "name": "Nvme10", 00:23:11.366 "trtype": "tcp", 00:23:11.366 "traddr": "10.0.0.2", 00:23:11.366 "adrfam": "ipv4", 00:23:11.366 "trsvcid": "4420", 00:23:11.366 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:11.366 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:11.366 "hdgst": false, 00:23:11.366 "ddgst": false 00:23:11.366 }, 00:23:11.366 "method": "bdev_nvme_attach_controller" 00:23:11.366 }' 00:23:11.366 [2024-07-15 21:39:01.142833] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.626 [2024-07-15 21:39:01.208181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.007 Running I/O for 10 seconds... 00:23:13.008 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.008 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:13.008 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:13.008 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.008 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.268 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.268 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:13.268 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:13.268 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:13.269 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:13.269 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:13.269 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:13.269 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:13.269 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:13.269 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:13.269 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:13.269 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.269 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.269 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:13.269 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:13.269 21:39:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:13.528 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:13.528 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:13.528 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:13.528 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:13.528 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.529 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.529 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.529 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:13.529 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:13.529 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2253164 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2253164 ']' 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2253164 00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:13.789 21:39:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:13.789 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2253164
00:23:14.049 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:14.049 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:14.049 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2253164'
00:23:14.049 killing process with pid 2253164
00:23:14.049 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2253164
00:23:14.049 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2253164
00:23:14.049 Received shutdown signal, test time was about 0.980416 seconds
00:23:14.049
00:23:14.049 Latency(us)
00:23:14.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:14.049 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:14.049 Verification LBA range: start 0x0 length 0x400
00:23:14.049 Nvme1n1 : 0.97 263.71 16.48 0.00 0.00 239769.17 23702.19 221074.77
00:23:14.049 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:14.049 Verification LBA range: start 0x0 length 0x400
00:23:14.049 Nvme2n1 : 0.97 263.19 16.45 0.00 0.00 235642.24 21845.33 244667.73
00:23:14.049 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:14.049 Verification LBA range: start 0x0 length 0x400
00:23:14.049 Nvme3n1 : 0.96 266.90 16.68 0.00 0.00 227496.96 22173.01 241172.48
00:23:14.049 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:14.049 Verification LBA range: start 0x0 length 0x400
00:23:14.049 Nvme4n1 : 0.94 203.98 12.75 0.00 0.00 291128.89 36918.61 249910.61
00:23:14.049 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:14.049 Verification LBA range: start 0x0 length 0x400
00:23:14.049 Nvme5n1 : 0.95 268.78 16.80 0.00 0.00 216386.56 16820.91 269134.51
00:23:14.049 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:14.049 Verification LBA range: start 0x0 length 0x400
00:23:14.049 Nvme6n1 : 0.94 204.98 12.81 0.00 0.00 276757.05 20534.61 253405.87
00:23:14.049 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:14.049 Verification LBA range: start 0x0 length 0x400
00:23:14.049 Nvme7n1 : 0.96 266.14 16.63 0.00 0.00 209230.51 22391.47 262144.00
00:23:14.049 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:14.049 Verification LBA range: start 0x0 length 0x400
00:23:14.049 Nvme8n1 : 0.95 203.08 12.69 0.00 0.00 267246.93 22719.15 251658.24
00:23:14.049 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:14.049 Verification LBA range: start 0x0 length 0x400
00:23:14.049 Nvme9n1 : 0.98 259.31 16.21 0.00 0.00 205356.02 23265.28 241172.48
00:23:14.049 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:14.049 Verification LBA range: start 0x0 length 0x400
00:23:14.049 Nvme10n1 : 0.95 201.06 12.57 0.00 0.00 257896.11 37137.07 276125.01
00:23:14.049 ===================================================================================================================
00:23:14.049 Total : 2401.13 150.07 0.00 0.00 239324.29
16820.91 276125.01 00:23:14.308 21:39:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2252724 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:15.246 rmmod nvme_tcp 00:23:15.246 rmmod nvme_fabrics 00:23:15.246 rmmod nvme_keyring 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2252724 ']' 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2252724 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2252724 ']' 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2252724 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.246 21:39:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2252724 00:23:15.246 21:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:15.246 21:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:15.246 21:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2252724' 00:23:15.246 killing process with pid 2252724 00:23:15.246 21:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2252724 00:23:15.246 21:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2252724 00:23:15.506 21:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
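For context, the waitforio helper traced earlier in this tc2 run (shutdown.sh@59-67) simply polls bdevperf's RPC socket until Nvme1n1 has completed at least 100 reads before the target is killed. Reduced to a standalone loop, with the retry count, interval, and jq filter copied from the trace, it looks roughly like this:

# Sketch of the waitforio polling seen above: up to 10 tries, 0.25 s apart.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
for _ in $(seq 1 10); do
    reads=$("$RPC" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break        # enough verify I/O has flowed; safe to shut down
    sleep 0.25
done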
00:23:15.506 21:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:15.506 21:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:15.506 21:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:15.506 21:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:15.506 21:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.506 21:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.506 21:39:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.051 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:18.051 00:23:18.051 real 0m7.978s 00:23:18.051 user 0m24.284s 00:23:18.051 sys 0m1.283s 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.052 ************************************ 00:23:18.052 END TEST nvmf_shutdown_tc2 00:23:18.052 ************************************ 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:18.052 ************************************ 00:23:18.052 START TEST nvmf_shutdown_tc3 00:23:18.052 ************************************ 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 
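The nvmftestfini sequence that closes tc2 above tears the environment back down before tc3 rebuilds it. In isolation the steps reduce to roughly the following; remove_spdk_ns is not expanded in the trace, so the namespace deletion shown here is an assumption:

# Hedged teardown sketch mirroring the tc2 cleanup above ($nvmfpid was 2252724 in this run).
kill "$nvmfpid" && wait "$nvmfpid"       # stop the namespaced nvmf_tgt
modprobe -v -r nvme-tcp                  # also unloads nvme_fabrics/nvme_keyring, as logged
ip netns delete cvl_0_0_ns_spdk          # assumed body of remove_spdk_ns
ip -4 addr flush cvl_0_1                 # release the initiator-side address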
00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:18.052 21:39:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:18.052 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:18.052 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:18.052 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:18.052 21:39:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:18.052 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.052 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:18.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:23:18.053 00:23:18.053 --- 10.0.0.2 ping statistics --- 00:23:18.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.053 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:18.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:23:18.053 00:23:18.053 --- 10.0.0.1 ping statistics --- 00:23:18.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.053 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2254684 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2254684 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2254684 ']' 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.053 21:39:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.330 [2024-07-15 21:39:07.856993] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:23:18.330 [2024-07-15 21:39:07.857059] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.330 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.330 [2024-07-15 21:39:07.942268] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.330 [2024-07-15 21:39:08.003736] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.330 [2024-07-15 21:39:08.003767] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.330 [2024-07-15 21:39:08.003773] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.330 [2024-07-15 21:39:08.003777] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.330 [2024-07-15 21:39:08.003781] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
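The "Found net devices under 0000:4b:00.x" lines above (and their tc2 counterparts earlier) come from a plain sysfs walk over the detected e810 functions. The same check can be reproduced by hand, mirroring the glob and name-stripping the trace uses:

# Sketch: map each e810 PCI function to its kernel netdev via sysfs, as the trace does.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue                  # skip functions whose driver exposes no netdev
        echo "Found net devices under $pci: ${dev##*/}"
    done
done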
00:23:18.330 [2024-07-15 21:39:08.003893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.330 [2024-07-15 21:39:08.004057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.330 [2024-07-15 21:39:08.004216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.330 [2024-07-15 21:39:08.004218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.912 [2024-07-15 21:39:08.668468] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.912 21:39:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.912 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.191 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.191 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.191 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.191 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:19.191 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:19.191 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.191 21:39:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.191 Malloc1 00:23:19.191 [2024-07-15 21:39:08.763107] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.191 Malloc2 00:23:19.191 Malloc3 00:23:19.191 Malloc4 00:23:19.191 Malloc5 00:23:19.191 Malloc6 00:23:19.191 Malloc7 00:23:19.452 Malloc8 00:23:19.452 Malloc9 00:23:19.452 Malloc10 00:23:19.452 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.452 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:19.452 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:19.452 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.452 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2255046 00:23:19.452 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2255046 /var/tmp/bdevperf.sock 00:23:19.452 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2255046 ']' 00:23:19.452 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
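Each pass through the shutdown.sh@27 loop above appends one subsystem's worth of RPC lines to rpcs.txt, and the single rpc_cmd call at shutdown.sh@35 then replays the whole file against /var/tmp/spdk.sock; that is what produces the Malloc1 through Malloc10 bdevs and the 10.0.0.2:4420 listener seen in this trace. A rough sketch of one such block, assuming standard SPDK RPC names; the malloc size, block size and serial number below are illustrative, not read from this log:

    # one iteration, for subsystem $i (1..10)
    bdev_malloc_create -b Malloc$i 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420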
00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.453 { 00:23:19.453 "params": { 00:23:19.453 "name": "Nvme$subsystem", 00:23:19.453 "trtype": "$TEST_TRANSPORT", 00:23:19.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.453 "adrfam": "ipv4", 00:23:19.453 "trsvcid": "$NVMF_PORT", 00:23:19.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.453 "hdgst": ${hdgst:-false}, 00:23:19.453 "ddgst": ${ddgst:-false} 00:23:19.453 }, 00:23:19.453 "method": "bdev_nvme_attach_controller" 00:23:19.453 } 00:23:19.453 EOF 00:23:19.453 )") 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.453 { 00:23:19.453 "params": { 00:23:19.453 "name": "Nvme$subsystem", 00:23:19.453 "trtype": "$TEST_TRANSPORT", 00:23:19.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.453 "adrfam": "ipv4", 00:23:19.453 "trsvcid": "$NVMF_PORT", 00:23:19.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.453 "hdgst": ${hdgst:-false}, 00:23:19.453 "ddgst": ${ddgst:-false} 00:23:19.453 }, 00:23:19.453 "method": "bdev_nvme_attach_controller" 00:23:19.453 } 00:23:19.453 EOF 00:23:19.453 )") 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.453 { 00:23:19.453 "params": { 00:23:19.453 "name": "Nvme$subsystem", 00:23:19.453 "trtype": "$TEST_TRANSPORT", 00:23:19.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.453 "adrfam": "ipv4", 00:23:19.453 "trsvcid": "$NVMF_PORT", 00:23:19.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.453 "hdgst": ${hdgst:-false}, 00:23:19.453 "ddgst": ${ddgst:-false} 00:23:19.453 }, 00:23:19.453 "method": "bdev_nvme_attach_controller" 00:23:19.453 } 00:23:19.453 EOF 00:23:19.453 )") 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.453 { 00:23:19.453 "params": { 00:23:19.453 "name": "Nvme$subsystem", 00:23:19.453 "trtype": "$TEST_TRANSPORT", 00:23:19.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.453 "adrfam": "ipv4", 00:23:19.453 "trsvcid": "$NVMF_PORT", 00:23:19.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.453 "hdgst": ${hdgst:-false}, 00:23:19.453 "ddgst": ${ddgst:-false} 00:23:19.453 }, 00:23:19.453 "method": "bdev_nvme_attach_controller" 00:23:19.453 } 00:23:19.453 EOF 00:23:19.453 )") 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.453 { 00:23:19.453 "params": { 00:23:19.453 "name": "Nvme$subsystem", 00:23:19.453 "trtype": "$TEST_TRANSPORT", 00:23:19.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.453 "adrfam": "ipv4", 00:23:19.453 "trsvcid": "$NVMF_PORT", 00:23:19.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.453 "hdgst": ${hdgst:-false}, 00:23:19.453 "ddgst": ${ddgst:-false} 00:23:19.453 }, 00:23:19.453 "method": "bdev_nvme_attach_controller" 00:23:19.453 } 00:23:19.453 EOF 00:23:19.453 )") 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.453 { 00:23:19.453 "params": { 00:23:19.453 "name": "Nvme$subsystem", 00:23:19.453 "trtype": "$TEST_TRANSPORT", 00:23:19.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.453 "adrfam": "ipv4", 00:23:19.453 "trsvcid": "$NVMF_PORT", 00:23:19.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.453 "hdgst": ${hdgst:-false}, 00:23:19.453 "ddgst": ${ddgst:-false} 00:23:19.453 }, 00:23:19.453 "method": "bdev_nvme_attach_controller" 00:23:19.453 } 00:23:19.453 EOF 00:23:19.453 )") 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.453 [2024-07-15 21:39:09.207746] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:23:19.453 [2024-07-15 21:39:09.207802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255046 ] 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.453 { 00:23:19.453 "params": { 00:23:19.453 "name": "Nvme$subsystem", 00:23:19.453 "trtype": "$TEST_TRANSPORT", 00:23:19.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.453 "adrfam": "ipv4", 00:23:19.453 "trsvcid": "$NVMF_PORT", 00:23:19.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.453 "hdgst": ${hdgst:-false}, 00:23:19.453 "ddgst": ${ddgst:-false} 00:23:19.453 }, 00:23:19.453 "method": "bdev_nvme_attach_controller" 00:23:19.453 } 00:23:19.453 EOF 00:23:19.453 )") 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.453 { 00:23:19.453 "params": { 00:23:19.453 "name": "Nvme$subsystem", 00:23:19.453 "trtype": "$TEST_TRANSPORT", 00:23:19.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.453 "adrfam": "ipv4", 00:23:19.453 "trsvcid": "$NVMF_PORT", 00:23:19.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.453 "hdgst": ${hdgst:-false}, 00:23:19.453 "ddgst": ${ddgst:-false} 00:23:19.453 }, 00:23:19.453 "method": "bdev_nvme_attach_controller" 00:23:19.453 } 00:23:19.453 EOF 00:23:19.453 )") 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.453 { 00:23:19.453 "params": { 00:23:19.453 "name": "Nvme$subsystem", 00:23:19.453 "trtype": "$TEST_TRANSPORT", 00:23:19.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.453 "adrfam": "ipv4", 00:23:19.453 "trsvcid": "$NVMF_PORT", 00:23:19.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.453 "hdgst": ${hdgst:-false}, 00:23:19.453 "ddgst": ${ddgst:-false} 00:23:19.453 }, 00:23:19.453 "method": "bdev_nvme_attach_controller" 00:23:19.453 } 00:23:19.453 EOF 00:23:19.453 )") 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.453 { 00:23:19.453 "params": { 00:23:19.453 "name": "Nvme$subsystem", 00:23:19.453 "trtype": "$TEST_TRANSPORT", 00:23:19.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.453 "adrfam": "ipv4", 00:23:19.453 "trsvcid": "$NVMF_PORT", 00:23:19.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.453 
"hdgst": ${hdgst:-false}, 00:23:19.453 "ddgst": ${ddgst:-false} 00:23:19.453 }, 00:23:19.453 "method": "bdev_nvme_attach_controller" 00:23:19.453 } 00:23:19.453 EOF 00:23:19.453 )") 00:23:19.453 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:19.453 21:39:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:19.453 "params": { 00:23:19.453 "name": "Nvme1", 00:23:19.454 "trtype": "tcp", 00:23:19.454 "traddr": "10.0.0.2", 00:23:19.454 "adrfam": "ipv4", 00:23:19.454 "trsvcid": "4420", 00:23:19.454 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.454 "hdgst": false, 00:23:19.454 "ddgst": false 00:23:19.454 }, 00:23:19.454 "method": "bdev_nvme_attach_controller" 00:23:19.454 },{ 00:23:19.454 "params": { 00:23:19.454 "name": "Nvme2", 00:23:19.454 "trtype": "tcp", 00:23:19.454 "traddr": "10.0.0.2", 00:23:19.454 "adrfam": "ipv4", 00:23:19.454 "trsvcid": "4420", 00:23:19.454 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:19.454 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:19.454 "hdgst": false, 00:23:19.454 "ddgst": false 00:23:19.454 }, 00:23:19.454 "method": "bdev_nvme_attach_controller" 00:23:19.454 },{ 00:23:19.454 "params": { 00:23:19.454 "name": "Nvme3", 00:23:19.454 "trtype": "tcp", 00:23:19.454 "traddr": "10.0.0.2", 00:23:19.454 "adrfam": "ipv4", 00:23:19.454 "trsvcid": "4420", 00:23:19.454 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:19.454 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:19.454 "hdgst": false, 00:23:19.454 "ddgst": false 00:23:19.454 }, 00:23:19.454 "method": "bdev_nvme_attach_controller" 00:23:19.454 },{ 00:23:19.454 "params": { 00:23:19.454 "name": "Nvme4", 00:23:19.454 "trtype": "tcp", 00:23:19.454 "traddr": "10.0.0.2", 00:23:19.454 "adrfam": "ipv4", 00:23:19.454 "trsvcid": "4420", 00:23:19.454 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:19.454 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:19.454 "hdgst": false, 00:23:19.454 "ddgst": false 00:23:19.454 }, 00:23:19.454 "method": "bdev_nvme_attach_controller" 00:23:19.454 },{ 00:23:19.454 "params": { 00:23:19.454 "name": "Nvme5", 00:23:19.454 "trtype": "tcp", 00:23:19.454 "traddr": "10.0.0.2", 00:23:19.454 "adrfam": "ipv4", 00:23:19.454 "trsvcid": "4420", 00:23:19.454 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:19.454 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:19.454 "hdgst": false, 00:23:19.454 "ddgst": false 00:23:19.454 }, 00:23:19.454 "method": "bdev_nvme_attach_controller" 00:23:19.454 },{ 00:23:19.454 "params": { 00:23:19.454 "name": "Nvme6", 00:23:19.454 "trtype": "tcp", 00:23:19.454 "traddr": "10.0.0.2", 00:23:19.454 "adrfam": "ipv4", 00:23:19.454 "trsvcid": "4420", 00:23:19.454 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:19.454 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:19.454 "hdgst": false, 00:23:19.454 "ddgst": false 00:23:19.454 }, 00:23:19.454 "method": "bdev_nvme_attach_controller" 00:23:19.454 },{ 00:23:19.454 "params": { 00:23:19.454 "name": "Nvme7", 00:23:19.454 "trtype": "tcp", 00:23:19.454 "traddr": "10.0.0.2", 00:23:19.454 "adrfam": "ipv4", 00:23:19.454 "trsvcid": "4420", 00:23:19.454 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:19.454 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:19.454 "hdgst": false, 
00:23:19.454 "ddgst": false 00:23:19.454 }, 00:23:19.454 "method": "bdev_nvme_attach_controller" 00:23:19.454 },{ 00:23:19.454 "params": { 00:23:19.454 "name": "Nvme8", 00:23:19.454 "trtype": "tcp", 00:23:19.454 "traddr": "10.0.0.2", 00:23:19.454 "adrfam": "ipv4", 00:23:19.454 "trsvcid": "4420", 00:23:19.454 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:19.454 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:19.454 "hdgst": false, 00:23:19.454 "ddgst": false 00:23:19.454 }, 00:23:19.454 "method": "bdev_nvme_attach_controller" 00:23:19.454 },{ 00:23:19.454 "params": { 00:23:19.454 "name": "Nvme9", 00:23:19.454 "trtype": "tcp", 00:23:19.454 "traddr": "10.0.0.2", 00:23:19.454 "adrfam": "ipv4", 00:23:19.454 "trsvcid": "4420", 00:23:19.454 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:19.454 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:19.454 "hdgst": false, 00:23:19.454 "ddgst": false 00:23:19.454 }, 00:23:19.454 "method": "bdev_nvme_attach_controller" 00:23:19.454 },{ 00:23:19.454 "params": { 00:23:19.454 "name": "Nvme10", 00:23:19.454 "trtype": "tcp", 00:23:19.454 "traddr": "10.0.0.2", 00:23:19.454 "adrfam": "ipv4", 00:23:19.454 "trsvcid": "4420", 00:23:19.454 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:19.454 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:19.454 "hdgst": false, 00:23:19.454 "ddgst": false 00:23:19.454 }, 00:23:19.454 "method": "bdev_nvme_attach_controller" 00:23:19.454 }' 00:23:19.715 [2024-07-15 21:39:09.267330] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.715 [2024-07-15 21:39:09.332383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.100 Running I/O for 10 seconds... 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.100 21:39:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:21.100 21:39:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:21.360 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:21.360 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:21.360 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.360 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.360 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.360 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.620 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.620 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:21.620 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:21.620 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2254684 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2254684 ']' 00:23:21.897 21:39:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2254684 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2254684 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2254684' 00:23:21.897 killing process with pid 2254684 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2254684 00:23:21.897 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2254684 00:23:21.897
[2024-07-15 21:39:11.557638] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce9f90 is same with the state(5) to be set
[2024-07-15 21:39:11.558437] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecbd0 is same with the state(5) to be set
[2024-07-15 21:39:11.559625] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cea470 is same with the state(5) to be set
[2024-07-15 21:39:11.560965] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cea970 is same with the state(5) to be set
[2024-07-15 21:39:11.562016] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceae50 is same with the state(5) to be set
[2024-07-15 21:39:11.562414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 21:39:11.562437] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set
[2024-07-15 21:39:11.562448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 21:39:11.562459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 21:39:11.562467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 21:39:11.562478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 21:39:11.562486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 21:39:11.562496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 21:39:11.562503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 21:39:11.562517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c7770 is same with the state(5) to be set
[2024-07-15 21:39:11.562545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 21:39:11.562555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 21:39:11.562564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 21:39:11.562572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 21:39:11.562580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 21:39:11.562587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 21:39:11.562597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 21:39:11.562605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 21:39:11.562614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2893480 is same with the state(5) to be set
*ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562626] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562631] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562635] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562640] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562650] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.900 [2024-07-15 21:39:11.562655] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562661] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with [2024-07-15 21:39:11.562660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:23:21.900 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.900 [2024-07-15 21:39:11.562668] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.900 [2024-07-15 21:39:11.562673] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562679] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.900 [2024-07-15 21:39:11.562684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-07-15 21:39:11.562689] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with id:0 cdw10:00000000 cdw11:00000000 00:23:21.900 the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562696] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.900 [2024-07-15 21:39:11.562701] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same 
with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns[2024-07-15 21:39:11.562707] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with id:0 cdw10:00000000 cdw11:00000000 00:23:21.900 the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562716] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.900 [2024-07-15 21:39:11.562720] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d6480 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562725] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562731] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562735] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562740] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-07-15 21:39:11.562749] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with id:0 cdw10:00000000 cdw11:00000000 00:23:21.900 the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562756] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.900 [2024-07-15 21:39:11.562761] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.900 [2024-07-15 21:39:11.562766] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.901 [2024-07-15 21:39:11.562766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.901 [2024-07-15 21:39:11.562772] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.901 [2024-07-15 21:39:11.562775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.562777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.901 
[2024-07-15 21:39:11.562782] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.901 [2024-07-15 21:39:11.562783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.901 [2024-07-15 21:39:11.562786] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with the state(5) to be set 00:23:21.901 [2024-07-15 21:39:11.562791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb330 is same with [2024-07-15 21:39:11.562791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:23:21.901 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.562801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.901 [2024-07-15 21:39:11.562810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.562818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f5fb0 is same with the state(5) to be set 00:23:21.901 [2024-07-15 21:39:11.562848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.901 [2024-07-15 21:39:11.562856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.562865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.901 [2024-07-15 21:39:11.562872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.562880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.901 [2024-07-15 21:39:11.562887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.562895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.901 [2024-07-15 21:39:11.562902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.562909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x270df00 is same with the state(5) to be set 00:23:21.901 [2024-07-15 21:39:11.562962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.562976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.562992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.901 [2024-07-15 21:39:11.563256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.901 [2024-07-15 21:39:11.563265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
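Every completion in the flood above is printed as ABORTED - SQ DELETION (00/08): status code type 0x0 (generic command status) with status code 0x08 (command aborted due to SQ deletion), which is what each command still outstanding on the admin or I/O submission queue receives when that queue is deleted for the controller reset exercised by this test. A minimal, self-contained decoder for that status pair is sketched below; it is illustrative only, does not use SPDK's headers, and the macro names are invented here (the numeric values follow the NVMe base specification).

#include <stdint.h>
#include <stdio.h>

/* Hypothetical names for the "(sct/sc)" pair printed as "(00/08)" above.
 * This is not SPDK's spdk_nvme_print_completion(); it only shows how the
 * two fields map to the text in the log. */
#define NVME_SCT_GENERIC            0x0   /* generic command status */
#define NVME_SC_ABORTED_SQ_DELETION 0x08  /* command aborted due to SQ deletion */

static const char *decode_status(uint8_t sct, uint8_t sc)
{
	if (sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION) {
		return "ABORTED - SQ DELETION";
	}
	return "unrecognized status";
}

int main(void)
{
	/* Prints: (00/08) -> ABORTED - SQ DELETION */
	printf("(%02x/%02x) -> %s\n", 0x0, 0x08, decode_status(0x0, 0x08));
	return 0;
}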
m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563544] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with [2024-07-15 21:39:11.563549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:23:21.902 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 [2024-07-15 21:39:11.563560] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.902 [2024-07-15 21:39:11.563562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.902 [2024-07-15 21:39:11.563565] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.902 [2024-07-15 21:39:11.563569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 21:39:11.563570] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.902 the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563578] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.903 [2024-07-15 21:39:11.563582] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563588] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.903 [2024-07-15 21:39:11.563593] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563598] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.903 [2024-07-15 21:39:11.563603] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.903 [2024-07-15 21:39:11.563608] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563614] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.903 [2024-07-15 21:39:11.563618] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563624] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with [2024-07-15 21:39:11.563623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:23:21.903 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.903 [2024-07-15 21:39:11.563633] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563638] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with [2024-07-15 21:39:11.563638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:1the state(5) to be set 00:23:21.903 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.903 [2024-07-15 21:39:11.563645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.903 [2024-07-15 21:39:11.563650] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563656] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.903 [2024-07-15 21:39:11.563660] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 21:39:11.563665] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.903 the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563672] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.903 [2024-07-15 21:39:11.563676] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563682] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.903 [2024-07-15 21:39:11.563687] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.903 [2024-07-15 21:39:11.563697] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.903 [2024-07-15 21:39:11.563702] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563708] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:1[2024-07-15 21:39:11.563712] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.903 the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563720] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.903 [2024-07-15 21:39:11.563724] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563730] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.903 [2024-07-15 21:39:11.563734] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 21:39:11.563739] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.903 the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563747] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.903 [2024-07-15 21:39:11.563751] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563757] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.903 [2024-07-15 21:39:11.563762] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.903 [2024-07-15 21:39:11.563771] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.903 [2024-07-15 21:39:11.563774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.903 [2024-07-15 21:39:11.563777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.563788] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 [2024-07-15 21:39:11.563793] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.563803] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563810] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 [2024-07-15 21:39:11.563815] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563821] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with 
[2024-07-15 21:39:11.563820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:1the state(5) to be set 00:23:21.904 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.563827] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 [2024-07-15 21:39:11.563832] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.563842] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with [2024-07-15 21:39:11.563847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:23:21.904 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 [2024-07-15 21:39:11.563854] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563859] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with [2024-07-15 21:39:11.563859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:1the state(5) to be set 00:23:21.904 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.563866] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 [2024-07-15 21:39:11.563871] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563876] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.563881] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 21:39:11.563886] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 the state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563894] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb830 is same with the 
state(5) to be set 00:23:21.904 [2024-07-15 21:39:11.563897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.563908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 [2024-07-15 21:39:11.563917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.563925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 [2024-07-15 21:39:11.563934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.563941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 [2024-07-15 21:39:11.563950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.563956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 [2024-07-15 21:39:11.563965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.563972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 [2024-07-15 21:39:11.563981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.563988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 [2024-07-15 21:39:11.563997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.564003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 [2024-07-15 21:39:11.564012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.564019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 [2024-07-15 21:39:11.564028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.564035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 [2024-07-15 21:39:11.564043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.564051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:21.904 [2024-07-15 21:39:11.564059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.904 [2024-07-15 21:39:11.564066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.904 [2024-07-15 21:39:11.564116] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x287b260 was disconnected and freed. reset controller. 00:23:21.905 [2024-07-15 21:39:11.564280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 
[2024-07-15 21:39:11.564427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 
21:39:11.564590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564613] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.905 [2024-07-15 21:39:11.564623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564628] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.905 [2024-07-15 21:39:11.564634] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with [2024-07-15 21:39:11.564633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:1the state(5) to be set 00:23:21.905 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564641] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.905 [2024-07-15 21:39:11.564643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564646] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.905 [2024-07-15 21:39:11.564651] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.905 [2024-07-15 21:39:11.564652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564656] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.905 [2024-07-15 21:39:11.564660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.905 [2024-07-15 21:39:11.564662] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.905 [2024-07-15 21:39:11.564669] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.905 [2024-07-15 21:39:11.564671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.905 [2024-07-15 21:39:11.564674] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564679] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 
00:23:21.906 [2024-07-15 21:39:11.564679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.906 [2024-07-15 21:39:11.564684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564689] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.906 [2024-07-15 21:39:11.564694] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.906 [2024-07-15 21:39:11.564699] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564705] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.906 [2024-07-15 21:39:11.564709] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564714] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.906 [2024-07-15 21:39:11.564719] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564724] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.906 [2024-07-15 21:39:11.564729] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.906 [2024-07-15 21:39:11.564733] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564739] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.906 [2024-07-15 21:39:11.564744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 
00:23:21.906 [2024-07-15 21:39:11.564749] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.906 [2024-07-15 21:39:11.564758] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.906 [2024-07-15 21:39:11.564764] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564772] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.906 [2024-07-15 21:39:11.564776] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564781] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.906 [2024-07-15 21:39:11.564785] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.906 [2024-07-15 21:39:11.564790] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564798] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.906 [2024-07-15 21:39:11.564802] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564808] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.906 [2024-07-15 21:39:11.564813] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564818] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.906 [2024-07-15 21:39:11.564823] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.906 [2024-07-15 21:39:11.564828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564834] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.906 [2024-07-15 21:39:11.564838] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.906 [2024-07-15 21:39:11.564845] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564853] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.906 [2024-07-15 21:39:11.564858] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564863] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.906 [2024-07-15 21:39:11.564871] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.906 [2024-07-15 21:39:11.564876] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564883] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.906 [2024-07-15 21:39:11.564884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.564888] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.564893] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.564893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907
[2024-07-15 21:39:11.564898] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.564901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.564903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.564908] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.564911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.564913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.564918] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.564918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.564924] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.564928] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.564929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.564933] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.564938] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.564938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.564944] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.564950] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.564949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.564957] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.564958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.564962] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cebd10 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.564968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15
21:39:11.564975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.564984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.564991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.565000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.565008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.565017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.565024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.565033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.565039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.565049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.565056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.565065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.565072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.565101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.565155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.565200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.565244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.565297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.565341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.565388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.565433] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.565482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.565526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.565535] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec1f0 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.565574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.565663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.907 [2024-07-15 21:39:11.565711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.907 [2024-07-15 21:39:11.565841] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.565855] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.565860] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.565864] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.565895] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.565941] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.565986] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.566031] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.566077] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.907 [2024-07-15 21:39:11.566126] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566175] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566220] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566272] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566318] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566365] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566411] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566457] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566507] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566553] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566598] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566737] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566782] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566874] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566920] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.566966] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567017] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567065] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567110] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567172] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567220] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567266] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567311] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567358] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the 
state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567404] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567450] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567496] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567541] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567587] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567634] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567730] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567776] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567825] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.567872] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.568183] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.568235] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.568294] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.568342] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.568388] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.568439] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.568485] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.568534] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.568580] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.908 [2024-07-15 21:39:11.568628] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.909 [2024-07-15 21:39:11.568677] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.909 [2024-07-15 21:39:11.568729] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.909 [2024-07-15 21:39:11.568776] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.909 [2024-07-15 21:39:11.568823] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.909 [2024-07-15 21:39:11.568869] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.909 [2024-07-15 21:39:11.568917] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec6d0 is same with the state(5) to be set 00:23:21.909 [2024-07-15 21:39:11.580919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.580972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.909 [2024-07-15 21:39:11.580982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.580992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.909 [2024-07-15 21:39:11.581000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.581009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.909 [2024-07-15 21:39:11.581021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.581031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.909 [2024-07-15 21:39:11.581038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.581048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.909 [2024-07-15 21:39:11.581054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.581064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.909 [2024-07-15 21:39:11.581071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.581080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.909 [2024-07-15 21:39:11.581087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 
[2024-07-15 21:39:11.581096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.909 [2024-07-15 21:39:11.581104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.581114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.909 [2024-07-15 21:39:11.581121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.581138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.909 [2024-07-15 21:39:11.581146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.581156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.909 [2024-07-15 21:39:11.581163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.581234] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26c1200 was disconnected and freed. reset controller. 00:23:21.909 [2024-07-15 21:39:11.581902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.581922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.581932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.581939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.581947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.581954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.581962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.581973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.581980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2794dd0 is same with the state(5) to be set 00:23:21.909 [2024-07-15 21:39:11.582008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.582016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.582024] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.582031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.582039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.582047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.582054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.582061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.582068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2761e50 is same with the state(5) to be set 00:23:21.909 [2024-07-15 21:39:11.582088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26c7770 (9): Bad file descriptor 00:23:21.909 [2024-07-15 21:39:11.582102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2893480 (9): Bad file descriptor 00:23:21.909 [2024-07-15 21:39:11.582132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.582141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.582149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.582156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.582164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.582171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.582178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.582185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.582192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x278bd90 is same with the state(5) to be set 00:23:21.909 [2024-07-15 21:39:11.582217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.582225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.582233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.582240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.582250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.582257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.582264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.909 [2024-07-15 21:39:11.582271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.909 [2024-07-15 21:39:11.582278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x278bbb0 is same with the state(5) to be set 00:23:21.909 [2024-07-15 21:39:11.582294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d6480 (9): Bad file descriptor 00:23:21.909 [2024-07-15 21:39:11.582310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26f5fb0 (9): Bad file descriptor 00:23:21.910 [2024-07-15 21:39:11.582330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.910 [2024-07-15 21:39:11.582341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.582354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.910 [2024-07-15 21:39:11.582365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.582375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.910 [2024-07-15 21:39:11.582383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.582391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.910 [2024-07-15 21:39:11.582398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.582405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2793b80 is same with the state(5) to be set 00:23:21.910 [2024-07-15 21:39:11.582420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x270df00 (9): Bad file descriptor 00:23:21.910 [2024-07-15 21:39:11.585061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.910 [2024-07-15 21:39:11.585623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.910 [2024-07-15 21:39:11.585633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.911 [2024-07-15 21:39:11.585767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 
21:39:11.585930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.585988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.585995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.586004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.586011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.586020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.586027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.586036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.586043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.586052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.586059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.586067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.586074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.586083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.586090] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.586099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.586106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.586115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.911 [2024-07-15 21:39:11.586126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.911 [2024-07-15 21:39:11.586182] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x319cdb0 was disconnected and freed. reset controller. 00:23:21.911 [2024-07-15 21:39:11.586294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:21.911 [2024-07-15 21:39:11.586311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:21.911 [2024-07-15 21:39:11.588322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.911 [2024-07-15 21:39:11.588362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26c7770 with addr=10.0.0.2, port=4420 00:23:21.911 [2024-07-15 21:39:11.588375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c7770 is same with the state(5) to be set 00:23:21.911 [2024-07-15 21:39:11.588837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.911 [2024-07-15 21:39:11.588856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26f5fb0 with addr=10.0.0.2, port=4420 00:23:21.911 [2024-07-15 21:39:11.588863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f5fb0 is same with the state(5) to be set 00:23:21.911 [2024-07-15 21:39:11.589197] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.911 [2024-07-15 21:39:11.589242] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.911 [2024-07-15 21:39:11.589712] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.911 [2024-07-15 21:39:11.589733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:21.911 [2024-07-15 21:39:11.589751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2794dd0 (9): Bad file descriptor 00:23:21.911 [2024-07-15 21:39:11.589765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26c7770 (9): Bad file descriptor 00:23:21.911 [2024-07-15 21:39:11.589775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26f5fb0 (9): Bad file descriptor 00:23:21.911 [2024-07-15 21:39:11.589858] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.911 [2024-07-15 21:39:11.589898] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.911 [2024-07-15 21:39:11.589933] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.911 [2024-07-15 21:39:11.590237] 
nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.911 [2024-07-15 21:39:11.590270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:21.911 [2024-07-15 21:39:11.590279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:21.911 [2024-07-15 21:39:11.590288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:21.911 [2024-07-15 21:39:11.590304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:21.911 [2024-07-15 21:39:11.590310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:21.911 [2024-07-15 21:39:11.590318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:21.911 [2024-07-15 21:39:11.590398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.911 [2024-07-15 21:39:11.590408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.911 [2024-07-15 21:39:11.590836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.912 [2024-07-15 21:39:11.590849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2794dd0 with addr=10.0.0.2, port=4420 00:23:21.912 [2024-07-15 21:39:11.590856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2794dd0 is same with the state(5) to be set 00:23:21.912 [2024-07-15 21:39:11.590911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2794dd0 (9): Bad file descriptor 00:23:21.912 [2024-07-15 21:39:11.590951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:21.912 [2024-07-15 21:39:11.590958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:21.912 [2024-07-15 21:39:11.590965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:21.912 [2024-07-15 21:39:11.591005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:21.912 [2024-07-15 21:39:11.591877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2761e50 (9): Bad file descriptor 00:23:21.912 [2024-07-15 21:39:11.591903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x278bd90 (9): Bad file descriptor 00:23:21.912 [2024-07-15 21:39:11.591924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x278bbb0 (9): Bad file descriptor 00:23:21.912 [2024-07-15 21:39:11.591947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2793b80 (9): Bad file descriptor 00:23:21.912 [2024-07-15 21:39:11.592048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592197] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.912 [2024-07-15 21:39:11.592673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.912 [2024-07-15 21:39:11.592683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.913 [2024-07-15 21:39:11.592871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.592985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.592994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.593001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.593010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.593018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.593028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 
21:39:11.593036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.593045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.593053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.593062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.593069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.593079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.593086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.593097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.593104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.593113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.593120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.593133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x285e5d0 is same with the state(5) to be set 00:23:21.913 [2024-07-15 21:39:11.594415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.594428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.594441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.594450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.594460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.594469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.594481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.594490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.594500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.594509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.594518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.594525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.594535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.594542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.913 [2024-07-15 21:39:11.594551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.913 [2024-07-15 21:39:11.594558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.594988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.594997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.914 [2024-07-15 21:39:11.595256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.914 [2024-07-15 21:39:11.595265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.595272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.595282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.595289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.595298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.595305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.595315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.595322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.595331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.595338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.595347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.595354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.595363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.595370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.595380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.595387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.595396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.595403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.595412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.595419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.595430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.595438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.595447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.595454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.595463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.595470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.595479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.595487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.595495] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x285faa0 is same with the state(5) to be set 00:23:21.915 [2024-07-15 21:39:11.596787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.596801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.596813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.596822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.596834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.596842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.596854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.596862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.596872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.596879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.596888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.596895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.596905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.596912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.596921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.596929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.596941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.596948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.596957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.596965] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.596974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.596981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.596990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.596998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.597007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.597015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.597025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.597032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.597041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.597048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.597058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.597065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.597074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.597082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.597091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.597098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.597107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.597115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.597128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.597135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.597145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.597154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.597164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.597171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.597180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.597187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.597197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.597204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.597213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.597220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.597229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.915 [2024-07-15 21:39:11.597236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.915 [2024-07-15 21:39:11.597245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:21.916 [2024-07-15 21:39:11.597644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 
21:39:11.597811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.916 [2024-07-15 21:39:11.597867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.916 [2024-07-15 21:39:11.597875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2874760 is same with the state(5) to be set 00:23:21.916 [2024-07-15 21:39:11.599392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:21.916 [2024-07-15 21:39:11.599414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:21.916 [2024-07-15 21:39:11.599424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:21.916 [2024-07-15 21:39:11.599957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.916 [2024-07-15 21:39:11.599973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2893480 with addr=10.0.0.2, port=4420 00:23:21.916 [2024-07-15 21:39:11.599982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2893480 is same with the state(5) to be set 00:23:21.916 [2024-07-15 21:39:11.600357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.916 [2024-07-15 21:39:11.600396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26d6480 with addr=10.0.0.2, port=4420 00:23:21.916 [2024-07-15 21:39:11.600407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d6480 is same with the state(5) to be set 00:23:21.916 [2024-07-15 21:39:11.600840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.916 [2024-07-15 21:39:11.600851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x270df00 with addr=10.0.0.2, port=4420 00:23:21.917 [2024-07-15 21:39:11.600858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x270df00 is same with the state(5) to be set 00:23:21.917 [2024-07-15 21:39:11.601690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:21.917 [2024-07-15 21:39:11.601705] 
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:21.917 [2024-07-15 21:39:11.601731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2893480 (9): Bad file descriptor 00:23:21.917 [2024-07-15 21:39:11.601746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d6480 (9): Bad file descriptor 00:23:21.917 [2024-07-15 21:39:11.601755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x270df00 (9): Bad file descriptor 00:23:21.917 [2024-07-15 21:39:11.602343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.917 [2024-07-15 21:39:11.602381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26f5fb0 with addr=10.0.0.2, port=4420 00:23:21.917 [2024-07-15 21:39:11.602391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f5fb0 is same with the state(5) to be set 00:23:21.917 [2024-07-15 21:39:11.602837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.917 [2024-07-15 21:39:11.602848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26c7770 with addr=10.0.0.2, port=4420 00:23:21.917 [2024-07-15 21:39:11.602855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c7770 is same with the state(5) to be set 00:23:21.917 [2024-07-15 21:39:11.602863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:21.917 [2024-07-15 21:39:11.602870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:21.917 [2024-07-15 21:39:11.602878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:21.917 [2024-07-15 21:39:11.602893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:21.917 [2024-07-15 21:39:11.602899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:21.917 [2024-07-15 21:39:11.602905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:21.917 [2024-07-15 21:39:11.602916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:21.917 [2024-07-15 21:39:11.602922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:21.917 [2024-07-15 21:39:11.602929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:21.917 [2024-07-15 21:39:11.602987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:21.917 [2024-07-15 21:39:11.603000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.917 [2024-07-15 21:39:11.603007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.917 [2024-07-15 21:39:11.603012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:21.917 [2024-07-15 21:39:11.603031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26f5fb0 (9): Bad file descriptor 00:23:21.917 [2024-07-15 21:39:11.603041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26c7770 (9): Bad file descriptor 00:23:21.917 [2024-07-15 21:39:11.603562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.917 [2024-07-15 21:39:11.603576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2794dd0 with addr=10.0.0.2, port=4420 00:23:21.917 [2024-07-15 21:39:11.603583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2794dd0 is same with the state(5) to be set 00:23:21.917 [2024-07-15 21:39:11.603590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:21.917 [2024-07-15 21:39:11.603596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:21.917 [2024-07-15 21:39:11.603603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:21.917 [2024-07-15 21:39:11.603613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:21.917 [2024-07-15 21:39:11.603623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:21.917 [2024-07-15 21:39:11.603630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:21.917 [2024-07-15 21:39:11.603672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.603988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.917 [2024-07-15 21:39:11.603998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.917 [2024-07-15 21:39:11.604005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.918 [2024-07-15 21:39:11.604278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 
21:39:11.604448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604615] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.918 [2024-07-15 21:39:11.604715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.918 [2024-07-15 21:39:11.604726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.604733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.604742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.604749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.604757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c2710 is same with the state(5) to be set 00:23:21.919 [2024-07-15 21:39:11.606044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606271] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.919 [2024-07-15 21:39:11.606749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.919 [2024-07-15 21:39:11.606758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.606765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.606774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.606782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.606791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.606798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.606807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.606814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.606824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.606831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.606840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.606848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.606856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.606864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.606873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.606882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.606891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.606899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.606908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.606916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.606925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.606932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.606942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.920 [2024-07-15 21:39:11.606949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.606958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.606966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.606975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.606982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.606991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.606999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.607008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.607015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.607024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.607031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.607040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.607048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.607057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.607064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.607073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.607080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.607091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.607099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.607108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 
21:39:11.607115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.607129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.607136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.607145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.607152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.607160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2e4d9a0 is same with the state(5) to be set 00:23:21.920 [2024-07-15 21:39:11.608442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.920 [2024-07-15 21:39:11.608764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.920 [2024-07-15 21:39:11.608771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.608781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.608788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.608797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.608806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.608816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.608823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.608832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.608839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.608849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.608856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.608866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.608873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.608882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.608889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.608898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.608906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.608915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.608922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.608931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.608939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.608948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.608955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.608964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.608971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.608981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.608988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.608997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.921 [2024-07-15 21:39:11.609370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.921 [2024-07-15 21:39:11.609377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.609387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.609394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.609403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.609410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.609419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.609427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.609436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.609445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.609454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.609461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.609470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.609478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.609487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.609494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.609504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.609511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.609520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.609528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.609535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2ff5210 is same with the state(5) to be set 00:23:21.922 [2024-07-15 21:39:11.610801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.610816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.610828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.610837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.610849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.610857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.610869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.610876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.610886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.610893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.610902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.610910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.610920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.610929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.610939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.610946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.610956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.610962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.610972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.610979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.610988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.610995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611029] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.922 [2024-07-15 21:39:11.611369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.922 [2024-07-15 21:39:11.611376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:21.923 [2024-07-15 21:39:11.611718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 
21:39:11.611886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.923 [2024-07-15 21:39:11.611893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.923 [2024-07-15 21:39:11.611901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2873290 is same with the state(5) to be set 00:23:21.923 [2024-07-15 21:39:11.613757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.923 [2024-07-15 21:39:11.613776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.923 [2024-07-15 21:39:11.613785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:21.923 [2024-07-15 21:39:11.613795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:21.923 [2024-07-15 21:39:11.613804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:21.923 [2024-07-15 21:39:11.613837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2794dd0 (9): Bad file descriptor 00:23:21.923 [2024-07-15 21:39:11.613895] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.923 [2024-07-15 21:39:11.613912] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.923 task offset: 24576 on job bdev=Nvme1n1 fails 00:23:21.923 00:23:21.923 Latency(us) 00:23:21.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.923 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.923 Job: Nvme1n1 ended in about 0.93 seconds with error 00:23:21.923 Verification LBA range: start 0x0 length 0x400 00:23:21.923 Nvme1n1 : 0.93 206.70 12.92 68.90 0.00 229570.56 21408.43 244667.73 00:23:21.923 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.923 Job: Nvme2n1 ended in about 0.94 seconds with error 00:23:21.923 Verification LBA range: start 0x0 length 0x400 00:23:21.923 Nvme2n1 : 0.94 204.34 12.77 68.11 0.00 227533.01 22500.69 221948.59 00:23:21.923 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.923 Job: Nvme3n1 ended in about 0.94 seconds with error 00:23:21.923 Verification LBA range: start 0x0 length 0x400 00:23:21.923 Nvme3n1 : 0.94 135.88 8.49 67.94 0.00 298009.03 41287.68 228939.09 00:23:21.923 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.923 Job: Nvme4n1 ended in about 0.93 seconds with error 00:23:21.923 Verification LBA range: start 0x0 length 0x400 00:23:21.923 Nvme4n1 : 0.93 206.41 12.90 68.80 0.00 215694.72 21189.97 249910.61 00:23:21.923 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.923 Job: Nvme5n1 ended in about 0.95 seconds with error 00:23:21.923 Verification LBA range: start 0x0 length 0x400 00:23:21.923 Nvme5n1 : 0.95 201.84 12.62 67.28 0.00 216215.25 20753.07 221948.59 00:23:21.923 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.923 Job: Nvme6n1 ended in about 0.95 seconds with error 00:23:21.923 Verification LBA range: start 0x0 length 0x400 00:23:21.923 Nvme6n1 : 0.95 134.23 8.39 67.11 
0.00 282865.49 24685.23 270882.13 00:23:21.923 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.923 Job: Nvme7n1 ended in about 0.96 seconds with error 00:23:21.924 Verification LBA range: start 0x0 length 0x400 00:23:21.924 Nvme7n1 : 0.96 133.89 8.37 66.95 0.00 277398.19 22282.24 286610.77 00:23:21.924 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.924 Job: Nvme8n1 ended in about 0.93 seconds with error 00:23:21.924 Verification LBA range: start 0x0 length 0x400 00:23:21.924 Nvme8n1 : 0.93 137.19 8.57 68.59 0.00 263436.87 5106.35 312825.17 00:23:21.924 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.924 Job: Nvme9n1 ended in about 0.96 seconds with error 00:23:21.924 Verification LBA range: start 0x0 length 0x400 00:23:21.924 Nvme9n1 : 0.96 133.56 8.35 66.78 0.00 265738.52 23156.05 241172.48 00:23:21.924 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:21.924 Job: Nvme10n1 ended in about 0.94 seconds with error 00:23:21.924 Verification LBA range: start 0x0 length 0x400 00:23:21.924 Nvme10n1 : 0.94 203.31 12.71 67.77 0.00 190991.15 22391.47 213210.45 00:23:21.924 =================================================================================================================== 00:23:21.924 Total : 1697.36 106.08 678.24 0.00 242353.23 5106.35 312825.17 00:23:21.924 [2024-07-15 21:39:11.642790] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:21.924 [2024-07-15 21:39:11.642824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:21.924 [2024-07-15 21:39:11.643386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.924 [2024-07-15 21:39:11.643402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2793b80 with addr=10.0.0.2, port=4420 00:23:21.924 [2024-07-15 21:39:11.643411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2793b80 is same with the state(5) to be set 00:23:21.924 [2024-07-15 21:39:11.643710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.924 [2024-07-15 21:39:11.643719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x278bd90 with addr=10.0.0.2, port=4420 00:23:21.924 [2024-07-15 21:39:11.643726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x278bd90 is same with the state(5) to be set 00:23:21.924 [2024-07-15 21:39:11.644156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.924 [2024-07-15 21:39:11.644171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x278bbb0 with addr=10.0.0.2, port=4420 00:23:21.924 [2024-07-15 21:39:11.644179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x278bbb0 is same with the state(5) to be set 00:23:21.924 [2024-07-15 21:39:11.644186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:21.924 [2024-07-15 21:39:11.644193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:21.924 [2024-07-15 21:39:11.644200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
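A note on reading the bdevperf summary above: per its header, each job row lists runtime(s), IOPS, MiB/s, Fail/s, TO/s (presumably failed and timed-out I/O per second) and then Average/min/max, with the latency columns in microseconds (the Latency(us) label). Assuming the 64 KiB IO size shown in every job line, the MiB/s column should simply be IOPS scaled by the IO size; a quick check of the Nvme1n1 row bears that out:

    awk 'BEGIN { printf "%.2f MiB/s\n", 206.70 * 65536 / (1024 * 1024) }'
    # -> 12.92 MiB/s, matching the MiB/s reported for Nvme1n1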
00:23:21.924 [2024-07-15 21:39:11.645294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:21.924 [2024-07-15 21:39:11.645307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:21.924 [2024-07-15 21:39:11.645315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:21.924 [2024-07-15 21:39:11.645325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:21.924 [2024-07-15 21:39:11.645335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:21.924 [2024-07-15 21:39:11.645343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.924 [2024-07-15 21:39:11.645854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.924 [2024-07-15 21:39:11.645866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2761e50 with addr=10.0.0.2, port=4420 00:23:21.924 [2024-07-15 21:39:11.645874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2761e50 is same with the state(5) to be set 00:23:21.924 [2024-07-15 21:39:11.645885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2793b80 (9): Bad file descriptor 00:23:21.924 [2024-07-15 21:39:11.645895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x278bd90 (9): Bad file descriptor 00:23:21.924 [2024-07-15 21:39:11.645903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x278bbb0 (9): Bad file descriptor 00:23:21.924 [2024-07-15 21:39:11.645938] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.924 [2024-07-15 21:39:11.645949] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.924 [2024-07-15 21:39:11.645959] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:21.924 [2024-07-15 21:39:11.646479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.924 [2024-07-15 21:39:11.646492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x270df00 with addr=10.0.0.2, port=4420 00:23:21.924 [2024-07-15 21:39:11.646499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x270df00 is same with the state(5) to be set 00:23:21.924 [2024-07-15 21:39:11.646947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.924 [2024-07-15 21:39:11.646956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26d6480 with addr=10.0.0.2, port=4420 00:23:21.924 [2024-07-15 21:39:11.646963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d6480 is same with the state(5) to be set 00:23:21.924 [2024-07-15 21:39:11.647367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.924 [2024-07-15 21:39:11.647377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2893480 with addr=10.0.0.2, port=4420 00:23:21.924 [2024-07-15 21:39:11.647383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2893480 is same with the state(5) to be set 00:23:21.924 [2024-07-15 21:39:11.647800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.924 [2024-07-15 21:39:11.647812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26c7770 with addr=10.0.0.2, port=4420 00:23:21.924 [2024-07-15 21:39:11.647819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c7770 is same with the state(5) to be set 00:23:21.924 [2024-07-15 21:39:11.648260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.924 [2024-07-15 21:39:11.648270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26f5fb0 with addr=10.0.0.2, port=4420 00:23:21.924 [2024-07-15 21:39:11.648276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f5fb0 is same with the state(5) to be set 00:23:21.924 [2024-07-15 21:39:11.648285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2761e50 (9): Bad file descriptor 00:23:21.924 [2024-07-15 21:39:11.648293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:21.924 [2024-07-15 21:39:11.648299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:21.924 [2024-07-15 21:39:11.648306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:21.924 [2024-07-15 21:39:11.648316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:21.924 [2024-07-15 21:39:11.648323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:21.924 [2024-07-15 21:39:11.648329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:23:21.924 [2024-07-15 21:39:11.648339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:21.924 [2024-07-15 21:39:11.648345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:21.924 [2024-07-15 21:39:11.648352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:21.924 [2024-07-15 21:39:11.648413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:21.924 [2024-07-15 21:39:11.648423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.924 [2024-07-15 21:39:11.648430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.924 [2024-07-15 21:39:11.648435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.924 [2024-07-15 21:39:11.648449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x270df00 (9): Bad file descriptor 00:23:21.924 [2024-07-15 21:39:11.648458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d6480 (9): Bad file descriptor 00:23:21.924 [2024-07-15 21:39:11.648467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2893480 (9): Bad file descriptor 00:23:21.924 [2024-07-15 21:39:11.648476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26c7770 (9): Bad file descriptor 00:23:21.924 [2024-07-15 21:39:11.648484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26f5fb0 (9): Bad file descriptor 00:23:21.924 [2024-07-15 21:39:11.648492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:21.924 [2024-07-15 21:39:11.648498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:21.924 [2024-07-15 21:39:11.648505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:21.924 [2024-07-15 21:39:11.648535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.924 [2024-07-15 21:39:11.648959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.924 [2024-07-15 21:39:11.648969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2794dd0 with addr=10.0.0.2, port=4420 00:23:21.924 [2024-07-15 21:39:11.648980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2794dd0 is same with the state(5) to be set 00:23:21.924 [2024-07-15 21:39:11.648988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:21.924 [2024-07-15 21:39:11.648994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:21.924 [2024-07-15 21:39:11.649001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:23:21.924 [2024-07-15 21:39:11.649011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:21.924 [2024-07-15 21:39:11.649017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:21.924 [2024-07-15 21:39:11.649024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:21.924 [2024-07-15 21:39:11.649033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:21.924 [2024-07-15 21:39:11.649039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:21.924 [2024-07-15 21:39:11.649046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:21.924 [2024-07-15 21:39:11.649055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:21.924 [2024-07-15 21:39:11.649061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:21.924 [2024-07-15 21:39:11.649067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:21.924 [2024-07-15 21:39:11.649077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:21.924 [2024-07-15 21:39:11.649084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:21.924 [2024-07-15 21:39:11.649090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:21.924 [2024-07-15 21:39:11.649118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.924 [2024-07-15 21:39:11.649129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.924 [2024-07-15 21:39:11.649135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.925 [2024-07-15 21:39:11.649141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.925 [2024-07-15 21:39:11.649147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.925 [2024-07-15 21:39:11.649155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2794dd0 (9): Bad file descriptor 00:23:21.925 [2024-07-15 21:39:11.649181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:21.925 [2024-07-15 21:39:11.649188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:21.925 [2024-07-15 21:39:11.649195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:21.925 [2024-07-15 21:39:11.649223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
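The failure pattern above is one event, not ten independent problems: it is consistent with the target side having already gone away in this shutdown test, so every reconnect attempt to 10.0.0.2:4420 is refused (errno = 111 is ECONNREFUSED on Linux), each controller's reinitialization fails, and bdev_nvme then reports the reset as failed for every cnode. When triaging a log like this, a couple of greps against a saved copy (console.log below is just a hypothetical file name) summarize the blast radius quickly:

    # how many reconnect attempts were refused
    grep -c 'connect() failed, errno = 111' console.log
    # which subsystems ended up in failed state
    grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*] in failed state' console.log | sort | uniq -c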
00:23:22.187 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:22.187 21:39:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2255046 00:23:23.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2255046) - No such process 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:23.128 rmmod nvme_tcp 00:23:23.128 rmmod nvme_fabrics 00:23:23.128 rmmod nvme_keyring 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.128 21:39:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.672 21:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:25.672 00:23:25.672 real 0m7.576s 00:23:25.672 user 0m17.880s 00:23:25.672 sys 0m1.247s 00:23:25.672 
21:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:25.672 21:39:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.672 ************************************ 00:23:25.672 END TEST nvmf_shutdown_tc3 00:23:25.672 ************************************ 00:23:25.672 21:39:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:25.672 21:39:15 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:25.672 00:23:25.672 real 0m32.296s 00:23:25.672 user 1m15.767s 00:23:25.672 sys 0m9.192s 00:23:25.672 21:39:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:25.672 21:39:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:25.672 ************************************ 00:23:25.672 END TEST nvmf_shutdown 00:23:25.672 ************************************ 00:23:25.672 21:39:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:25.672 21:39:15 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:25.672 21:39:15 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:25.672 21:39:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.672 21:39:15 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:25.672 21:39:15 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.672 21:39:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.672 21:39:15 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:25.672 21:39:15 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:25.672 21:39:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:25.672 21:39:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.672 21:39:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.672 ************************************ 00:23:25.672 START TEST nvmf_multicontroller 00:23:25.672 ************************************ 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:25.672 * Looking for test storage... 
00:23:25.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.672 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:25.673 21:39:15 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:25.673 21:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.264 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.265 21:39:22 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:32.265 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:32.265 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:32.265 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:32.265 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.265 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.526 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.526 21:39:22 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.526 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:32.526 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.526 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.526 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:32.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:23:32.786 00:23:32.786 --- 10.0.0.2 ping statistics --- 00:23:32.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.786 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:32.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:23:32.786 00:23:32.786 --- 10.0.0.1 ping statistics --- 00:23:32.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.786 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2260272 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2260272 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2260272 ']' 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:32.786 21:39:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.786 [2024-07-15 21:39:22.476955] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:23:32.787 [2024-07-15 21:39:22.477040] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.787 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.787 [2024-07-15 21:39:22.566669] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:33.047 [2024-07-15 21:39:22.657617] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.047 [2024-07-15 21:39:22.657665] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.047 [2024-07-15 21:39:22.657676] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.047 [2024-07-15 21:39:22.657684] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.047 [2024-07-15 21:39:22.657689] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
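For context on the addresses used throughout this test: nvmf_tcp_init above wires the two ice-bound ports together through a network namespace rather than over a real fabric. The target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the initiator side keeps cvl_0_1 with 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is launched inside that namespace. Stripped of the xtrace noise, the sequence amounts to roughly the following (interface names and addresses exactly as printed above):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator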
00:23:33.047 [2024-07-15 21:39:22.657806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.047 [2024-07-15 21:39:22.657968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.047 [2024-07-15 21:39:22.657969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.616 [2024-07-15 21:39:23.286975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.616 Malloc0 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.616 [2024-07-15 21:39:23.352507] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.616 
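The rpc_cmd calls above (and the ones that follow for the 4421 listener and for cnode2) are effectively the test harness's wrapper around SPDK's scripts/rpc.py, driving the target over its JSON-RPC socket. Reproducing just the first subsystem's setup by hand, from an SPDK checkout, would look roughly like this sketch (socket path as used in this run; exact flag spellings can vary between SPDK versions):

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create 64 512 -b Malloc0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420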
21:39:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.616 [2024-07-15 21:39:23.364469] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.616 Malloc1 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.616 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.876 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.876 21:39:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:33.876 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.876 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.876 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.876 21:39:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2260502 00:23:33.876 21:39:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:33.876 21:39:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:33.876 21:39:23 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 2260502 /var/tmp/bdevperf.sock 00:23:33.876 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2260502 ']' 00:23:33.876 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.876 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.876 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.876 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.876 21:39:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.818 NVMe0n1 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.818 1 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.818 request: 00:23:34.818 { 00:23:34.818 "name": "NVMe0", 00:23:34.818 "trtype": "tcp", 00:23:34.818 "traddr": "10.0.0.2", 00:23:34.818 "adrfam": "ipv4", 00:23:34.818 "trsvcid": "4420", 00:23:34.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.818 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:34.818 "hostaddr": "10.0.0.2", 00:23:34.818 "hostsvcid": "60000", 00:23:34.818 "prchk_reftag": false, 00:23:34.818 "prchk_guard": false, 00:23:34.818 "hdgst": false, 00:23:34.818 "ddgst": false, 00:23:34.818 "method": "bdev_nvme_attach_controller", 00:23:34.818 "req_id": 1 00:23:34.818 } 00:23:34.818 Got JSON-RPC error response 00:23:34.818 response: 00:23:34.818 { 00:23:34.818 "code": -114, 00:23:34.818 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:34.818 } 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.818 request: 00:23:34.818 { 00:23:34.818 "name": "NVMe0", 00:23:34.818 "trtype": "tcp", 00:23:34.818 "traddr": "10.0.0.2", 00:23:34.818 "adrfam": "ipv4", 00:23:34.818 "trsvcid": "4420", 00:23:34.818 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:34.818 "hostaddr": "10.0.0.2", 00:23:34.818 "hostsvcid": "60000", 00:23:34.818 "prchk_reftag": false, 00:23:34.818 "prchk_guard": false, 00:23:34.818 
"hdgst": false, 00:23:34.818 "ddgst": false, 00:23:34.818 "method": "bdev_nvme_attach_controller", 00:23:34.818 "req_id": 1 00:23:34.818 } 00:23:34.818 Got JSON-RPC error response 00:23:34.818 response: 00:23:34.818 { 00:23:34.818 "code": -114, 00:23:34.818 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:34.818 } 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.818 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.819 request: 00:23:34.819 { 00:23:34.819 "name": "NVMe0", 00:23:34.819 "trtype": "tcp", 00:23:34.819 "traddr": "10.0.0.2", 00:23:34.819 "adrfam": "ipv4", 00:23:34.819 "trsvcid": "4420", 00:23:34.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.819 "hostaddr": "10.0.0.2", 00:23:34.819 "hostsvcid": "60000", 00:23:34.819 "prchk_reftag": false, 00:23:34.819 "prchk_guard": false, 00:23:34.819 "hdgst": false, 00:23:34.819 "ddgst": false, 00:23:34.819 "multipath": "disable", 00:23:34.819 "method": "bdev_nvme_attach_controller", 00:23:34.819 "req_id": 1 00:23:34.819 } 00:23:34.819 Got JSON-RPC error response 00:23:34.819 response: 00:23:34.819 { 00:23:34.819 "code": -114, 00:23:34.819 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:34.819 } 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.819 21:39:24 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.819 request: 00:23:34.819 { 00:23:34.819 "name": "NVMe0", 00:23:34.819 "trtype": "tcp", 00:23:34.819 "traddr": "10.0.0.2", 00:23:34.819 "adrfam": "ipv4", 00:23:34.819 "trsvcid": "4420", 00:23:34.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.819 "hostaddr": "10.0.0.2", 00:23:34.819 "hostsvcid": "60000", 00:23:34.819 "prchk_reftag": false, 00:23:34.819 "prchk_guard": false, 00:23:34.819 "hdgst": false, 00:23:34.819 "ddgst": false, 00:23:34.819 "multipath": "failover", 00:23:34.819 "method": "bdev_nvme_attach_controller", 00:23:34.819 "req_id": 1 00:23:34.819 } 00:23:34.819 Got JSON-RPC error response 00:23:34.819 response: 00:23:34.819 { 00:23:34.819 "code": -114, 00:23:34.819 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:34.819 } 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.819 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.080 00:23:35.080 21:39:24 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.080 21:39:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:35.080 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.080 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.080 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.080 21:39:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:35.080 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.080 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.080 00:23:35.080 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.080 21:39:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:35.081 21:39:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:35.081 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.081 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.342 21:39:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.342 21:39:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:35.342 21:39:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:36.288 0 00:23:36.288 21:39:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:36.288 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.288 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.288 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.288 21:39:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2260502 00:23:36.288 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2260502 ']' 00:23:36.288 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2260502 00:23:36.288 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:36.288 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.288 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2260502 00:23:36.288 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:36.288 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:36.288 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2260502' 00:23:36.288 killing process with pid 2260502 00:23:36.288 21:39:26 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2260502 00:23:36.288 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2260502 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:36.552 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:36.552 [2024-07-15 21:39:23.481794] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:23:36.552 [2024-07-15 21:39:23.481856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2260502 ] 00:23:36.552 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.552 [2024-07-15 21:39:23.540497] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.552 [2024-07-15 21:39:23.605162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.552 [2024-07-15 21:39:24.867162] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name e10eb5b5-4f09-4c96-90e5-f877634e93c3 already exists 00:23:36.552 [2024-07-15 21:39:24.867193] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:e10eb5b5-4f09-4c96-90e5-f877634e93c3 alias for bdev NVMe1n1 00:23:36.552 [2024-07-15 21:39:24.867202] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:36.552 Running I/O for 1 seconds... 
00:23:36.552 00:23:36.552 Latency(us) 00:23:36.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.552 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:36.552 NVMe0n1 : 1.00 27971.53 109.26 0.00 0.00 4561.48 3959.47 10649.60 00:23:36.552 =================================================================================================================== 00:23:36.552 Total : 27971.53 109.26 0.00 0.00 4561.48 3959.47 10649.60 00:23:36.552 Received shutdown signal, test time was about 1.000000 seconds 00:23:36.552 00:23:36.552 Latency(us) 00:23:36.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.552 =================================================================================================================== 00:23:36.552 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:36.552 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.552 rmmod nvme_tcp 00:23:36.552 rmmod nvme_fabrics 00:23:36.552 rmmod nvme_keyring 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:36.552 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2260272 ']' 00:23:36.553 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2260272 00:23:36.553 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2260272 ']' 00:23:36.553 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2260272 00:23:36.553 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:36.553 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.553 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2260272 00:23:36.880 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:36.880 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:36.880 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2260272' 00:23:36.880 killing process with pid 2260272 00:23:36.880 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2260272 00:23:36.880 21:39:26 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2260272 00:23:36.880 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.880 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.880 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.880 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.880 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.880 21:39:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.880 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.880 21:39:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.795 21:39:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.795 00:23:38.795 real 0m13.433s 00:23:38.795 user 0m16.554s 00:23:38.795 sys 0m6.096s 00:23:38.795 21:39:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:38.795 21:39:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:38.795 ************************************ 00:23:38.795 END TEST nvmf_multicontroller 00:23:38.795 ************************************ 00:23:39.056 21:39:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:39.056 21:39:28 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:39.056 21:39:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:39.056 21:39:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:39.056 21:39:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.056 ************************************ 00:23:39.056 START TEST nvmf_aer 00:23:39.056 ************************************ 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:39.056 * Looking for test storage... 
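(For orientation in the nvmf_aer trace that follows: the target-side setup the test drives can be condensed into a handful of JSON-RPC calls that appear further down in this log. The sketch below is an illustrative reconstruction, not captured output; the rpc.py wrapper path is an assumption, while the method names and arguments are copied from the rpc_cmd lines in the trace.)

# Sketch only (assumed rpc.py wrapper; arguments taken verbatim from the trace below)
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192                         # TCP transport init
$RPC bdev_malloc_create 64 512 --name Malloc0                        # backing bdev for nsid 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# While the aer example app is attached, a second namespace is added; that is what
# produces the "Changed Namespace" asynchronous event reported near the end of the test.
$RPC bdev_malloc_create 64 4096 --name Malloc1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
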
00:23:39.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.056 21:39:28 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.057 21:39:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.646 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.646 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:45.646 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:45.646 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:45.646 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:45.646 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:45.646 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:45.647 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:45.647 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:45.647 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:45.908 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:23:45.908 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:45.908 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.908 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:45.909 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.909 
21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:45.909 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:46.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:23:46.170 00:23:46.170 --- 10.0.0.2 ping statistics --- 00:23:46.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.170 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.374 ms 00:23:46.170 00:23:46.170 --- 10.0.0.1 ping statistics --- 00:23:46.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.170 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2265202 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2265202 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2265202 ']' 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.170 21:39:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:46.170 [2024-07-15 21:39:35.872263] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:23:46.170 [2024-07-15 21:39:35.872329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.170 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.170 [2024-07-15 21:39:35.943358] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:46.431 [2024-07-15 21:39:36.020808] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.431 [2024-07-15 21:39:36.020846] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:46.431 [2024-07-15 21:39:36.020854] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.431 [2024-07-15 21:39:36.020861] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.431 [2024-07-15 21:39:36.020867] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.431 [2024-07-15 21:39:36.021427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.431 [2024-07-15 21:39:36.021509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.431 [2024-07-15 21:39:36.021666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.431 [2024-07-15 21:39:36.021666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.002 [2024-07-15 21:39:36.703832] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.002 Malloc0 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:47.002 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.003 [2024-07-15 21:39:36.763265] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.003 [ 00:23:47.003 { 00:23:47.003 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:47.003 "subtype": "Discovery", 00:23:47.003 "listen_addresses": [], 00:23:47.003 "allow_any_host": true, 00:23:47.003 "hosts": [] 00:23:47.003 }, 00:23:47.003 { 00:23:47.003 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.003 "subtype": "NVMe", 00:23:47.003 "listen_addresses": [ 00:23:47.003 { 00:23:47.003 "trtype": "TCP", 00:23:47.003 "adrfam": "IPv4", 00:23:47.003 "traddr": "10.0.0.2", 00:23:47.003 "trsvcid": "4420" 00:23:47.003 } 00:23:47.003 ], 00:23:47.003 "allow_any_host": true, 00:23:47.003 "hosts": [], 00:23:47.003 "serial_number": "SPDK00000000000001", 00:23:47.003 "model_number": "SPDK bdev Controller", 00:23:47.003 "max_namespaces": 2, 00:23:47.003 "min_cntlid": 1, 00:23:47.003 "max_cntlid": 65519, 00:23:47.003 "namespaces": [ 00:23:47.003 { 00:23:47.003 "nsid": 1, 00:23:47.003 "bdev_name": "Malloc0", 00:23:47.003 "name": "Malloc0", 00:23:47.003 "nguid": "987202BA41F04BABA21EEF6DB2F91A34", 00:23:47.003 "uuid": "987202ba-41f0-4bab-a21e-ef6db2f91a34" 00:23:47.003 } 00:23:47.003 ] 00:23:47.003 } 00:23:47.003 ] 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2265343 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:47.003 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:47.264 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.264 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:47.264 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:47.264 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:47.264 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:47.264 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:47.264 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:47.264 21:39:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.264 Malloc1 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.264 [ 00:23:47.264 { 00:23:47.264 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:47.264 "subtype": "Discovery", 00:23:47.264 "listen_addresses": [], 00:23:47.264 "allow_any_host": true, 00:23:47.264 "hosts": [] 00:23:47.264 }, 00:23:47.264 { 00:23:47.264 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.264 "subtype": "NVMe", 00:23:47.264 "listen_addresses": [ 00:23:47.264 { 00:23:47.264 "trtype": "TCP", 00:23:47.264 "adrfam": "IPv4", 00:23:47.264 "traddr": "10.0.0.2", 00:23:47.264 "trsvcid": "4420" 00:23:47.264 } 00:23:47.264 ], 00:23:47.264 "allow_any_host": true, 00:23:47.264 "hosts": [], 00:23:47.264 "serial_number": "SPDK00000000000001", 00:23:47.264 "model_number": "SPDK bdev Controller", 00:23:47.264 "max_namespaces": 2, 00:23:47.264 "min_cntlid": 1, 00:23:47.264 "max_cntlid": 65519, 00:23:47.264 "namespaces": [ 00:23:47.264 { 00:23:47.264 "nsid": 1, 00:23:47.264 "bdev_name": "Malloc0", 00:23:47.264 "name": "Malloc0", 00:23:47.264 "nguid": "987202BA41F04BABA21EEF6DB2F91A34", 00:23:47.264 "uuid": "987202ba-41f0-4bab-a21e-ef6db2f91a34" 00:23:47.264 }, 00:23:47.264 { 00:23:47.264 "nsid": 2, 00:23:47.264 "bdev_name": "Malloc1", 00:23:47.264 "name": "Malloc1", 00:23:47.264 "nguid": "3F9D0EEBDF574923AB084EF7ACA790F7", 00:23:47.264 "uuid": "3f9d0eeb-df57-4923-ab08-4ef7aca790f7" 00:23:47.264 } 00:23:47.264 Asynchronous Event Request test 00:23:47.264 Attaching to 10.0.0.2 00:23:47.264 Attached to 10.0.0.2 00:23:47.264 Registering asynchronous event callbacks... 00:23:47.264 Starting namespace attribute notice tests for all controllers... 00:23:47.264 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:47.264 aer_cb - Changed Namespace 00:23:47.264 Cleaning up... 
00:23:47.264 ] 00:23:47.264 } 00:23:47.264 ] 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2265343 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.264 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:47.525 rmmod nvme_tcp 00:23:47.525 rmmod nvme_fabrics 00:23:47.525 rmmod nvme_keyring 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2265202 ']' 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2265202 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2265202 ']' 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2265202 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2265202 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2265202' 00:23:47.525 killing process with pid 2265202 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2265202 00:23:47.525 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2265202 00:23:47.785 21:39:37 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:47.785 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:47.785 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:47.785 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:47.785 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:47.785 21:39:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.785 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.785 21:39:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.695 21:39:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:49.695 00:23:49.695 real 0m10.786s 00:23:49.695 user 0m7.410s 00:23:49.695 sys 0m5.654s 00:23:49.695 21:39:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:49.695 21:39:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.695 ************************************ 00:23:49.695 END TEST nvmf_aer 00:23:49.695 ************************************ 00:23:49.695 21:39:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:49.695 21:39:39 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:49.695 21:39:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:49.695 21:39:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:49.695 21:39:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:49.956 ************************************ 00:23:49.956 START TEST nvmf_async_init 00:23:49.956 ************************************ 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:49.956 * Looking for test storage... 
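(A brief aside on the nvmf_async_init preamble that follows: the NGUID it records is simply a dash-stripped UUID. The two lines below are an illustrative reconstruction of that step, not captured output; the value shown is the one this particular run generated.)

nguid=$(uuidgen | tr -d -)   # async_init.sh: uuidgen piped through `tr -d -`
echo "$nguid"                # this run produced 3a737059bf08435c84a5c4c1fc92ba7d
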
00:23:49.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3a737059bf08435c84a5c4c1fc92ba7d 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:49.956 21:39:39 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:49.956 21:39:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:58.091 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:58.091 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:58.091 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
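The discovery loop above resolves each matching PCI function to its kernel net device purely through sysfs (pci_net_devs="/sys/bus/pci/devices/$pci/net/"*). The same lookup can be reproduced by hand; the device path is taken from the trace and the commented results are what this host reports:

  pci=0000:4b:00.0
  ls "/sys/bus/pci/devices/$pci/net/"                        # cvl_0_0, the port picked as the target interface
  basename "$(readlink /sys/bus/pci/devices/$pci/driver)"    # ice
  lspci -n -s "$pci"                                         # ... 8086:159b, matched against the e810 ID list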
00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:58.091 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:58.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:23:58.091 00:23:58.091 --- 10.0.0.2 ping statistics --- 00:23:58.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.091 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:58.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:58.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:23:58.091 00:23:58.091 --- 10.0.0.1 ping statistics --- 00:23:58.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.091 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2269651 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2269651 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2269651 ']' 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.091 21:39:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.092 21:39:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.092 21:39:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.092 21:39:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.092 [2024-07-15 21:39:46.957539] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
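Condensed, the nvmf_tcp_init sequence traced above turns the two E810 ports into a two-endpoint test topology: cvl_0_0 is moved into a private network namespace and carries the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the target application is then launched inside that namespace. Collected in one place (interface names are specific to this host; the nvmf_tgt path is shortened from the workspace path in the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic arriving on the initiator port
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &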
00:23:58.092 [2024-07-15 21:39:46.957604] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.092 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.092 [2024-07-15 21:39:47.026836] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.092 [2024-07-15 21:39:47.100159] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.092 [2024-07-15 21:39:47.100196] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.092 [2024-07-15 21:39:47.100203] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.092 [2024-07-15 21:39:47.100210] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.092 [2024-07-15 21:39:47.100215] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.092 [2024-07-15 21:39:47.100233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.092 [2024-07-15 21:39:47.766661] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.092 null0 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.092 21:39:47 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3a737059bf08435c84a5c4c1fc92ba7d 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.092 [2024-07-15 21:39:47.822903] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.092 21:39:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.353 nvme0n1 00:23:58.353 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.353 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:58.353 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.353 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.353 [ 00:23:58.353 { 00:23:58.353 "name": "nvme0n1", 00:23:58.353 "aliases": [ 00:23:58.353 "3a737059-bf08-435c-84a5-c4c1fc92ba7d" 00:23:58.353 ], 00:23:58.353 "product_name": "NVMe disk", 00:23:58.353 "block_size": 512, 00:23:58.353 "num_blocks": 2097152, 00:23:58.353 "uuid": "3a737059-bf08-435c-84a5-c4c1fc92ba7d", 00:23:58.353 "assigned_rate_limits": { 00:23:58.353 "rw_ios_per_sec": 0, 00:23:58.353 "rw_mbytes_per_sec": 0, 00:23:58.353 "r_mbytes_per_sec": 0, 00:23:58.353 "w_mbytes_per_sec": 0 00:23:58.353 }, 00:23:58.353 "claimed": false, 00:23:58.353 "zoned": false, 00:23:58.353 "supported_io_types": { 00:23:58.353 "read": true, 00:23:58.353 "write": true, 00:23:58.353 "unmap": false, 00:23:58.353 "flush": true, 00:23:58.353 "reset": true, 00:23:58.353 "nvme_admin": true, 00:23:58.353 "nvme_io": true, 00:23:58.353 "nvme_io_md": false, 00:23:58.353 "write_zeroes": true, 00:23:58.353 "zcopy": false, 00:23:58.353 "get_zone_info": false, 00:23:58.353 "zone_management": false, 00:23:58.353 "zone_append": false, 00:23:58.353 "compare": true, 00:23:58.353 "compare_and_write": true, 00:23:58.353 "abort": true, 00:23:58.353 "seek_hole": false, 00:23:58.353 "seek_data": false, 00:23:58.353 "copy": true, 00:23:58.353 "nvme_iov_md": false 00:23:58.353 }, 00:23:58.353 "memory_domains": [ 00:23:58.353 { 00:23:58.353 "dma_device_id": "system", 00:23:58.353 "dma_device_type": 1 00:23:58.353 } 00:23:58.353 ], 00:23:58.353 "driver_specific": { 00:23:58.353 "nvme": [ 00:23:58.353 { 00:23:58.353 "trid": { 00:23:58.353 "trtype": "TCP", 00:23:58.353 "adrfam": "IPv4", 00:23:58.353 "traddr": "10.0.0.2", 
00:23:58.353 "trsvcid": "4420", 00:23:58.353 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:58.353 }, 00:23:58.353 "ctrlr_data": { 00:23:58.353 "cntlid": 1, 00:23:58.353 "vendor_id": "0x8086", 00:23:58.353 "model_number": "SPDK bdev Controller", 00:23:58.353 "serial_number": "00000000000000000000", 00:23:58.353 "firmware_revision": "24.09", 00:23:58.353 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.353 "oacs": { 00:23:58.353 "security": 0, 00:23:58.353 "format": 0, 00:23:58.353 "firmware": 0, 00:23:58.353 "ns_manage": 0 00:23:58.353 }, 00:23:58.353 "multi_ctrlr": true, 00:23:58.353 "ana_reporting": false 00:23:58.353 }, 00:23:58.353 "vs": { 00:23:58.353 "nvme_version": "1.3" 00:23:58.353 }, 00:23:58.353 "ns_data": { 00:23:58.353 "id": 1, 00:23:58.353 "can_share": true 00:23:58.353 } 00:23:58.353 } 00:23:58.353 ], 00:23:58.353 "mp_policy": "active_passive" 00:23:58.353 } 00:23:58.353 } 00:23:58.353 ] 00:23:58.353 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.353 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:58.353 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.353 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.353 [2024-07-15 21:39:48.087755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:58.353 [2024-07-15 21:39:48.087815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d12f0 (9): Bad file descriptor 00:23:58.614 [2024-07-15 21:39:48.230222] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:58.614 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.614 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:58.614 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.614 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.614 [ 00:23:58.614 { 00:23:58.614 "name": "nvme0n1", 00:23:58.614 "aliases": [ 00:23:58.614 "3a737059-bf08-435c-84a5-c4c1fc92ba7d" 00:23:58.614 ], 00:23:58.614 "product_name": "NVMe disk", 00:23:58.614 "block_size": 512, 00:23:58.614 "num_blocks": 2097152, 00:23:58.614 "uuid": "3a737059-bf08-435c-84a5-c4c1fc92ba7d", 00:23:58.614 "assigned_rate_limits": { 00:23:58.614 "rw_ios_per_sec": 0, 00:23:58.614 "rw_mbytes_per_sec": 0, 00:23:58.614 "r_mbytes_per_sec": 0, 00:23:58.614 "w_mbytes_per_sec": 0 00:23:58.614 }, 00:23:58.614 "claimed": false, 00:23:58.614 "zoned": false, 00:23:58.614 "supported_io_types": { 00:23:58.614 "read": true, 00:23:58.614 "write": true, 00:23:58.614 "unmap": false, 00:23:58.614 "flush": true, 00:23:58.614 "reset": true, 00:23:58.614 "nvme_admin": true, 00:23:58.614 "nvme_io": true, 00:23:58.614 "nvme_io_md": false, 00:23:58.614 "write_zeroes": true, 00:23:58.614 "zcopy": false, 00:23:58.614 "get_zone_info": false, 00:23:58.614 "zone_management": false, 00:23:58.614 "zone_append": false, 00:23:58.614 "compare": true, 00:23:58.614 "compare_and_write": true, 00:23:58.615 "abort": true, 00:23:58.615 "seek_hole": false, 00:23:58.615 "seek_data": false, 00:23:58.615 "copy": true, 00:23:58.615 "nvme_iov_md": false 00:23:58.615 }, 00:23:58.615 "memory_domains": [ 00:23:58.615 { 00:23:58.615 "dma_device_id": "system", 00:23:58.615 "dma_device_type": 
1 00:23:58.615 } 00:23:58.615 ], 00:23:58.615 "driver_specific": { 00:23:58.615 "nvme": [ 00:23:58.615 { 00:23:58.615 "trid": { 00:23:58.615 "trtype": "TCP", 00:23:58.615 "adrfam": "IPv4", 00:23:58.615 "traddr": "10.0.0.2", 00:23:58.615 "trsvcid": "4420", 00:23:58.615 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:58.615 }, 00:23:58.615 "ctrlr_data": { 00:23:58.615 "cntlid": 2, 00:23:58.615 "vendor_id": "0x8086", 00:23:58.615 "model_number": "SPDK bdev Controller", 00:23:58.615 "serial_number": "00000000000000000000", 00:23:58.615 "firmware_revision": "24.09", 00:23:58.615 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.615 "oacs": { 00:23:58.615 "security": 0, 00:23:58.615 "format": 0, 00:23:58.615 "firmware": 0, 00:23:58.615 "ns_manage": 0 00:23:58.615 }, 00:23:58.615 "multi_ctrlr": true, 00:23:58.615 "ana_reporting": false 00:23:58.615 }, 00:23:58.615 "vs": { 00:23:58.615 "nvme_version": "1.3" 00:23:58.615 }, 00:23:58.615 "ns_data": { 00:23:58.615 "id": 1, 00:23:58.615 "can_share": true 00:23:58.615 } 00:23:58.615 } 00:23:58.615 ], 00:23:58.615 "mp_policy": "active_passive" 00:23:58.615 } 00:23:58.615 } 00:23:58.615 ] 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.65hltyCiFg 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.65hltyCiFg 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.615 [2024-07-15 21:39:48.300423] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.615 [2024-07-15 21:39:48.300543] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.65hltyCiFg 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
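The test then repeats the attach over TLS. On the target side, the configuration traced above boils down to the following (rpc_cmd is the harness wrapper around scripts/rpc.py, so the calls are written here as direct rpc.py invocations; the PSK value and temp-file name are the ones generated in this run, and the interchange-format key is written into a mode-0600 file before being referenced by path):

  key_path=/tmp/tmp.65hltyCiFg
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"

The deprecation warnings that follow are expected: passing a PSK by file path (and spdk_nvme_ctrlr_opts.psk on the initiator side) is flagged for removal in v24.09.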
00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.615 [2024-07-15 21:39:48.312445] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.65hltyCiFg 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.615 [2024-07-15 21:39:48.324496] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.615 [2024-07-15 21:39:48.324534] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:58.615 nvme0n1 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.615 [ 00:23:58.615 { 00:23:58.615 "name": "nvme0n1", 00:23:58.615 "aliases": [ 00:23:58.615 "3a737059-bf08-435c-84a5-c4c1fc92ba7d" 00:23:58.615 ], 00:23:58.615 "product_name": "NVMe disk", 00:23:58.615 "block_size": 512, 00:23:58.615 "num_blocks": 2097152, 00:23:58.615 "uuid": "3a737059-bf08-435c-84a5-c4c1fc92ba7d", 00:23:58.615 "assigned_rate_limits": { 00:23:58.615 "rw_ios_per_sec": 0, 00:23:58.615 "rw_mbytes_per_sec": 0, 00:23:58.615 "r_mbytes_per_sec": 0, 00:23:58.615 "w_mbytes_per_sec": 0 00:23:58.615 }, 00:23:58.615 "claimed": false, 00:23:58.615 "zoned": false, 00:23:58.615 "supported_io_types": { 00:23:58.615 "read": true, 00:23:58.615 "write": true, 00:23:58.615 "unmap": false, 00:23:58.615 "flush": true, 00:23:58.615 "reset": true, 00:23:58.615 "nvme_admin": true, 00:23:58.615 "nvme_io": true, 00:23:58.615 "nvme_io_md": false, 00:23:58.615 "write_zeroes": true, 00:23:58.615 "zcopy": false, 00:23:58.615 "get_zone_info": false, 00:23:58.615 "zone_management": false, 00:23:58.615 "zone_append": false, 00:23:58.615 "compare": true, 00:23:58.615 "compare_and_write": true, 00:23:58.615 "abort": true, 00:23:58.615 "seek_hole": false, 00:23:58.615 "seek_data": false, 00:23:58.615 "copy": true, 00:23:58.615 "nvme_iov_md": false 00:23:58.615 }, 00:23:58.615 "memory_domains": [ 00:23:58.615 { 00:23:58.615 "dma_device_id": "system", 00:23:58.615 "dma_device_type": 1 00:23:58.615 } 00:23:58.615 ], 00:23:58.615 "driver_specific": { 00:23:58.615 "nvme": [ 00:23:58.615 { 00:23:58.615 "trid": { 00:23:58.615 "trtype": "TCP", 00:23:58.615 "adrfam": "IPv4", 00:23:58.615 "traddr": "10.0.0.2", 00:23:58.615 "trsvcid": "4421", 00:23:58.615 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:58.615 }, 00:23:58.615 "ctrlr_data": { 00:23:58.615 "cntlid": 3, 00:23:58.615 "vendor_id": "0x8086", 00:23:58.615 "model_number": "SPDK bdev Controller", 00:23:58.615 "serial_number": "00000000000000000000", 00:23:58.615 "firmware_revision": "24.09", 00:23:58.615 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:23:58.615 "oacs": { 00:23:58.615 "security": 0, 00:23:58.615 "format": 0, 00:23:58.615 "firmware": 0, 00:23:58.615 "ns_manage": 0 00:23:58.615 }, 00:23:58.615 "multi_ctrlr": true, 00:23:58.615 "ana_reporting": false 00:23:58.615 }, 00:23:58.615 "vs": { 00:23:58.615 "nvme_version": "1.3" 00:23:58.615 }, 00:23:58.615 "ns_data": { 00:23:58.615 "id": 1, 00:23:58.615 "can_share": true 00:23:58.615 } 00:23:58.615 } 00:23:58.615 ], 00:23:58.615 "mp_policy": "active_passive" 00:23:58.615 } 00:23:58.615 } 00:23:58.615 ] 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.615 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.876 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.876 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.65hltyCiFg 00:23:58.876 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:58.876 21:39:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:58.876 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:58.876 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:58.876 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:58.876 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:58.876 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:58.876 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:58.876 rmmod nvme_tcp 00:23:58.876 rmmod nvme_fabrics 00:23:58.876 rmmod nvme_keyring 00:23:58.876 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:58.876 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:58.876 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:58.876 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2269651 ']' 00:23:58.876 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2269651 00:23:58.877 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2269651 ']' 00:23:58.877 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2269651 00:23:58.877 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:23:58.877 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:58.877 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2269651 00:23:58.877 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:58.877 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:58.877 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2269651' 00:23:58.877 killing process with pid 2269651 00:23:58.877 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2269651 00:23:58.877 [2024-07-15 21:39:48.579528] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:23:58.877 [2024-07-15 21:39:48.579556] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:58.877 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2269651 00:23:59.138 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:59.138 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:59.138 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:59.138 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.138 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.138 21:39:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.138 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.138 21:39:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.053 21:39:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:01.053 00:24:01.053 real 0m11.244s 00:24:01.053 user 0m3.970s 00:24:01.053 sys 0m5.750s 00:24:01.053 21:39:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:01.053 21:39:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.053 ************************************ 00:24:01.053 END TEST nvmf_async_init 00:24:01.053 ************************************ 00:24:01.053 21:39:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:01.053 21:39:50 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:01.053 21:39:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:01.053 21:39:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.053 21:39:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:01.053 ************************************ 00:24:01.053 START TEST dma 00:24:01.053 ************************************ 00:24:01.053 21:39:50 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:01.316 * Looking for test storage... 
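Before the dma output continues, the nvmf_async_init run that just finished can be collapsed, xtrace wrapping stripped, into the RPC sequence below (again written as direct rpc.py calls; the nguid is this run's uuidgen output with the dashes removed):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py bdev_null_create null0 1024 512          # 1 GiB null bdev (2097152 blocks of 512 B)
  rpc.py bdev_wait_for_examine
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3a737059bf08435c84a5c4c1fc92ba7d
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_get_bdevs -b nvme0n1                # the nguid reappears as the bdev uuid/alias
  rpc.py bdev_nvme_reset_controller nvme0         # cntlid 1 -> 2 across the reconnect
  rpc.py bdev_nvme_detach_controller nvme0

The TLS variant on port 4421 then re-runs the attach/inspect/detach cycle with the PSK shown earlier, bumping cntlid to 3.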
00:24:01.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:01.316 21:39:50 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.316 21:39:50 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.316 21:39:50 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.316 21:39:50 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.316 21:39:50 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.316 21:39:50 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.316 21:39:50 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.316 21:39:50 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:01.316 21:39:50 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.316 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.317 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.317 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.317 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.317 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.317 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.317 21:39:50 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.317 21:39:50 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:01.317 21:39:50 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:01.317 00:24:01.317 real 0m0.129s 00:24:01.317 user 0m0.063s 00:24:01.317 sys 0m0.074s 00:24:01.317 21:39:50 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:01.317 21:39:50 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:24:01.317 ************************************ 00:24:01.317 END TEST dma 00:24:01.317 ************************************ 00:24:01.317 21:39:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:01.317 21:39:51 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:01.317 21:39:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:01.317 21:39:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.317 21:39:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:01.317 ************************************ 00:24:01.317 START TEST nvmf_identify 00:24:01.317 ************************************ 00:24:01.317 21:39:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:01.579 * Looking for test storage... 
00:24:01.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:01.579 21:39:51 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.579 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:01.579 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.579 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.579 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.579 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.579 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:01.580 21:39:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:08.172 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:08.172 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:08.172 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:08.172 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.172 21:39:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:08.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:24:08.434 00:24:08.434 --- 10.0.0.2 ping statistics --- 00:24:08.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.434 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:24:08.434 00:24:08.434 --- 10.0.0.1 ping statistics --- 00:24:08.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.434 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:08.434 21:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:08.695 21:39:58 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2274051 00:24:08.695 21:39:58 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:08.696 21:39:58 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:08.696 21:39:58 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2274051 00:24:08.696 21:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2274051 ']' 00:24:08.696 21:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.696 21:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.696 21:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.696 21:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.696 21:39:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:08.696 [2024-07-15 21:39:58.304110] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:24:08.696 [2024-07-15 21:39:58.304205] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.696 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.696 [2024-07-15 21:39:58.376700] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:08.696 [2024-07-15 21:39:58.452202] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
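With the namespaces in place, host/identify.sh@18 starts the target application itself inside cvl_0_0_ns_spdk: nvmf_tgt with -m 0xF (a four-core mask, matching the four reactor threads reported just below), -e 0xFFFF (the tracepoint group mask echoed by the app_setup_trace notice above), and -i 0 (the shared-memory id, the same $NVMF_APP_SHM_ID referenced by the process_shm trap). A minimal sketch of launching it and waiting for the RPC socket, roughly what waitforlisten does (the polling loop here is an illustrative stand-in, not the harness code; paths are relative to an SPDK build tree):

  # Launch the target inside the namespace and remember its pid.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the default RPC socket (/var/tmp/spdk.sock) until the target answers.
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is up"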
00:24:08.696 [2024-07-15 21:39:58.452240] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.696 [2024-07-15 21:39:58.452248] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.696 [2024-07-15 21:39:58.452254] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.696 [2024-07-15 21:39:58.452260] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.696 [2024-07-15 21:39:58.452418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.696 [2024-07-15 21:39:58.452540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.696 [2024-07-15 21:39:58.452697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.696 [2024-07-15 21:39:58.452699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.268 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.268 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:09.268 21:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.268 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.268 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.533 [2024-07-15 21:39:59.073647] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.533 Malloc0 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.533 [2024-07-15 21:39:59.173089] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.533 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.533 [ 00:24:09.533 { 00:24:09.533 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:09.533 "subtype": "Discovery", 00:24:09.533 "listen_addresses": [ 00:24:09.533 { 00:24:09.533 "trtype": "TCP", 00:24:09.533 "adrfam": "IPv4", 00:24:09.533 "traddr": "10.0.0.2", 00:24:09.533 "trsvcid": "4420" 00:24:09.533 } 00:24:09.533 ], 00:24:09.533 "allow_any_host": true, 00:24:09.533 "hosts": [] 00:24:09.533 }, 00:24:09.533 { 00:24:09.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.533 "subtype": "NVMe", 00:24:09.533 "listen_addresses": [ 00:24:09.533 { 00:24:09.533 "trtype": "TCP", 00:24:09.533 "adrfam": "IPv4", 00:24:09.533 "traddr": "10.0.0.2", 00:24:09.533 "trsvcid": "4420" 00:24:09.533 } 00:24:09.533 ], 00:24:09.533 "allow_any_host": true, 00:24:09.533 "hosts": [], 00:24:09.533 "serial_number": "SPDK00000000000001", 00:24:09.533 "model_number": "SPDK bdev Controller", 00:24:09.533 "max_namespaces": 32, 00:24:09.533 "min_cntlid": 1, 00:24:09.533 "max_cntlid": 65519, 00:24:09.533 "namespaces": [ 00:24:09.533 { 00:24:09.533 "nsid": 1, 00:24:09.533 "bdev_name": "Malloc0", 00:24:09.533 "name": "Malloc0", 00:24:09.533 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:09.533 "eui64": "ABCDEF0123456789", 00:24:09.533 "uuid": "4b9f2048-9477-4498-b014-bc83f73db2c3" 00:24:09.534 } 00:24:09.534 ] 00:24:09.534 } 00:24:09.534 ] 00:24:09.534 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.534 21:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:09.534 [2024-07-15 21:39:59.235421] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
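Every rpc_cmd in the section above is effectively forwarded to scripts/rpc.py against the target's /var/tmp/spdk.sock, so the bring-up that produced the nvmf_get_subsystems listing can be replayed by hand with the same arguments (values copied from this run):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # malloc bdev backing namespace 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems                           # should match the JSON shown above

The -a flag corresponds to "allow_any_host": true and -s to the serial number in that JSON; the two add_listener calls are what produce the "Listening on 10.0.0.2 port 4420" notices for cnode1 and for the discovery subsystem.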
00:24:09.534 [2024-07-15 21:39:59.235464] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274359 ] 00:24:09.534 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.534 [2024-07-15 21:39:59.268792] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:09.534 [2024-07-15 21:39:59.268843] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:09.534 [2024-07-15 21:39:59.268848] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:09.534 [2024-07-15 21:39:59.268860] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:09.534 [2024-07-15 21:39:59.268867] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:09.534 [2024-07-15 21:39:59.272149] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:09.534 [2024-07-15 21:39:59.272183] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x228aec0 0 00:24:09.534 [2024-07-15 21:39:59.280133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:09.534 [2024-07-15 21:39:59.280145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:09.534 [2024-07-15 21:39:59.280149] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:09.534 [2024-07-15 21:39:59.280153] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:09.534 [2024-07-15 21:39:59.280191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.280197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.280201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x228aec0) 00:24:09.534 [2024-07-15 21:39:59.280213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:09.534 [2024-07-15 21:39:59.280229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230dfc0, cid 0, qid 0 00:24:09.534 [2024-07-15 21:39:59.288134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.534 [2024-07-15 21:39:59.288143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.534 [2024-07-15 21:39:59.288146] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.288151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230dfc0) on tqpair=0x228aec0 00:24:09.534 [2024-07-15 21:39:59.288160] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:09.534 [2024-07-15 21:39:59.288167] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:09.534 [2024-07-15 21:39:59.288173] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:09.534 [2024-07-15 21:39:59.288185] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.288189] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.288193] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x228aec0) 00:24:09.534 [2024-07-15 21:39:59.288200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.534 [2024-07-15 21:39:59.288213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230dfc0, cid 0, qid 0 00:24:09.534 [2024-07-15 21:39:59.288459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.534 [2024-07-15 21:39:59.288466] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.534 [2024-07-15 21:39:59.288470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.288474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230dfc0) on tqpair=0x228aec0 00:24:09.534 [2024-07-15 21:39:59.288479] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:09.534 [2024-07-15 21:39:59.288490] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:09.534 [2024-07-15 21:39:59.288498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.288501] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.288505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x228aec0) 00:24:09.534 [2024-07-15 21:39:59.288512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.534 [2024-07-15 21:39:59.288522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230dfc0, cid 0, qid 0 00:24:09.534 [2024-07-15 21:39:59.288718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.534 [2024-07-15 21:39:59.288724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.534 [2024-07-15 21:39:59.288728] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.288732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230dfc0) on tqpair=0x228aec0 00:24:09.534 [2024-07-15 21:39:59.288737] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:09.534 [2024-07-15 21:39:59.288744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:09.534 [2024-07-15 21:39:59.288751] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.288754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.288758] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x228aec0) 00:24:09.534 [2024-07-15 21:39:59.288765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.534 [2024-07-15 21:39:59.288774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230dfc0, cid 0, qid 0 00:24:09.534 [2024-07-15 21:39:59.288968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.534 
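The *DEBUG* trace running through this part of the log is the NVMe-oF admin-queue bring-up performed by spdk_nvme_identify against the discovery controller: the ICReq/ICResp exchange on the freshly connected TCP socket, a FABRIC CONNECT that returns CNTLID 0x0001, property reads of VS and CAP, and then (continuing below) the CC.EN/CSTS.RDY enable handshake, IDENTIFY CONTROLLER, keep-alive setup, and GET LOG PAGE reads of the discovery log. All of it comes from the single invocation logged above; shown here on its own for reference, the transport ID string selects the target and -L all is what switches on the per-component debug output:

  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all   # enable SPDK debug log flags, producing the nvme_tcp/nvme_ctrlr traces seen here
  # trtype/adrfam  - NVMe over TCP, IPv4
  # traddr:trsvcid - the listener created above, 10.0.0.2 port 4420
  # subnqn         - the well-known discovery NQN, so the tool identifies the discovery
  #                  controller and then reads out its discovery log entries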
[2024-07-15 21:39:59.288976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.534 [2024-07-15 21:39:59.288982] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.288988] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230dfc0) on tqpair=0x228aec0 00:24:09.534 [2024-07-15 21:39:59.288996] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:09.534 [2024-07-15 21:39:59.289006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.289011] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.289015] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x228aec0) 00:24:09.534 [2024-07-15 21:39:59.289022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.534 [2024-07-15 21:39:59.289032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230dfc0, cid 0, qid 0 00:24:09.534 [2024-07-15 21:39:59.289277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.534 [2024-07-15 21:39:59.289284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.534 [2024-07-15 21:39:59.289287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.289291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230dfc0) on tqpair=0x228aec0 00:24:09.534 [2024-07-15 21:39:59.289296] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:09.534 [2024-07-15 21:39:59.289300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:09.534 [2024-07-15 21:39:59.289310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:09.534 [2024-07-15 21:39:59.289415] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:09.534 [2024-07-15 21:39:59.289420] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:09.534 [2024-07-15 21:39:59.289428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.289432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.289435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x228aec0) 00:24:09.534 [2024-07-15 21:39:59.289442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.534 [2024-07-15 21:39:59.289452] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230dfc0, cid 0, qid 0 00:24:09.534 [2024-07-15 21:39:59.289677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.534 [2024-07-15 21:39:59.289684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.534 [2024-07-15 21:39:59.289687] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.289693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230dfc0) on tqpair=0x228aec0 00:24:09.534 [2024-07-15 21:39:59.289699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:09.534 [2024-07-15 21:39:59.289708] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.289711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.289715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x228aec0) 00:24:09.534 [2024-07-15 21:39:59.289723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.534 [2024-07-15 21:39:59.289733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230dfc0, cid 0, qid 0 00:24:09.534 [2024-07-15 21:39:59.289971] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.534 [2024-07-15 21:39:59.289979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.534 [2024-07-15 21:39:59.289983] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.289986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230dfc0) on tqpair=0x228aec0 00:24:09.534 [2024-07-15 21:39:59.289991] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:09.534 [2024-07-15 21:39:59.289995] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:09.534 [2024-07-15 21:39:59.290003] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:09.534 [2024-07-15 21:39:59.290011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:09.534 [2024-07-15 21:39:59.290020] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.534 [2024-07-15 21:39:59.290024] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x228aec0) 00:24:09.534 [2024-07-15 21:39:59.290031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.534 [2024-07-15 21:39:59.290041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230dfc0, cid 0, qid 0 00:24:09.535 [2024-07-15 21:39:59.290269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.535 [2024-07-15 21:39:59.290276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.535 [2024-07-15 21:39:59.290282] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290287] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x228aec0): datao=0, datal=4096, cccid=0 00:24:09.535 [2024-07-15 21:39:59.290291] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230dfc0) on tqpair(0x228aec0): expected_datao=0, payload_size=4096 00:24:09.535 [2024-07-15 21:39:59.290296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290303] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290307] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.535 [2024-07-15 21:39:59.290555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.535 [2024-07-15 21:39:59.290559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230dfc0) on tqpair=0x228aec0 00:24:09.535 [2024-07-15 21:39:59.290570] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:09.535 [2024-07-15 21:39:59.290577] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:09.535 [2024-07-15 21:39:59.290582] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:09.535 [2024-07-15 21:39:59.290587] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:09.535 [2024-07-15 21:39:59.290591] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:09.535 [2024-07-15 21:39:59.290596] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:09.535 [2024-07-15 21:39:59.290603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:09.535 [2024-07-15 21:39:59.290610] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x228aec0) 00:24:09.535 [2024-07-15 21:39:59.290625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:09.535 [2024-07-15 21:39:59.290635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230dfc0, cid 0, qid 0 00:24:09.535 [2024-07-15 21:39:59.290852] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.535 [2024-07-15 21:39:59.290858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.535 [2024-07-15 21:39:59.290862] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230dfc0) on tqpair=0x228aec0 00:24:09.535 [2024-07-15 21:39:59.290873] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290876] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x228aec0) 00:24:09.535 [2024-07-15 21:39:59.290886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.535 [2024-07-15 21:39:59.290892] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290896] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290899] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x228aec0) 00:24:09.535 [2024-07-15 21:39:59.290905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.535 [2024-07-15 21:39:59.290913] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x228aec0) 00:24:09.535 [2024-07-15 21:39:59.290926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.535 [2024-07-15 21:39:59.290932] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290935] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290939] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.535 [2024-07-15 21:39:59.290944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.535 [2024-07-15 21:39:59.290949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:09.535 [2024-07-15 21:39:59.290959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:09.535 [2024-07-15 21:39:59.290966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.290969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x228aec0) 00:24:09.535 [2024-07-15 21:39:59.290976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.535 [2024-07-15 21:39:59.290987] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230dfc0, cid 0, qid 0 00:24:09.535 [2024-07-15 21:39:59.290993] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e140, cid 1, qid 0 00:24:09.535 [2024-07-15 21:39:59.290997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e2c0, cid 2, qid 0 00:24:09.535 [2024-07-15 21:39:59.291002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.535 [2024-07-15 21:39:59.291007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e5c0, cid 4, qid 0 00:24:09.535 [2024-07-15 21:39:59.291286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.535 [2024-07-15 21:39:59.291293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.535 [2024-07-15 21:39:59.291297] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.291300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e5c0) on tqpair=0x228aec0 00:24:09.535 [2024-07-15 21:39:59.291305] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:09.535 [2024-07-15 21:39:59.291310] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:09.535 [2024-07-15 21:39:59.291320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.291324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x228aec0) 00:24:09.535 [2024-07-15 21:39:59.291331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.535 [2024-07-15 21:39:59.291341] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e5c0, cid 4, qid 0 00:24:09.535 [2024-07-15 21:39:59.291592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.535 [2024-07-15 21:39:59.291598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.535 [2024-07-15 21:39:59.291601] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.291605] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x228aec0): datao=0, datal=4096, cccid=4 00:24:09.535 [2024-07-15 21:39:59.291612] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230e5c0) on tqpair(0x228aec0): expected_datao=0, payload_size=4096 00:24:09.535 [2024-07-15 21:39:59.291616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.291737] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.291741] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.291946] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.535 [2024-07-15 21:39:59.291952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.535 [2024-07-15 21:39:59.291956] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.291960] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e5c0) on tqpair=0x228aec0 00:24:09.535 [2024-07-15 21:39:59.291971] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:09.535 [2024-07-15 21:39:59.291992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.291997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x228aec0) 00:24:09.535 [2024-07-15 21:39:59.292003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.535 [2024-07-15 21:39:59.292010] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.292014] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.292017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x228aec0) 00:24:09.535 [2024-07-15 21:39:59.292023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.535 [2024-07-15 21:39:59.292036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x230e5c0, cid 4, qid 0 00:24:09.535 [2024-07-15 21:39:59.292042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e740, cid 5, qid 0 00:24:09.535 [2024-07-15 21:39:59.296159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.535 [2024-07-15 21:39:59.296169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.535 [2024-07-15 21:39:59.296173] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.296176] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x228aec0): datao=0, datal=1024, cccid=4 00:24:09.535 [2024-07-15 21:39:59.296181] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230e5c0) on tqpair(0x228aec0): expected_datao=0, payload_size=1024 00:24:09.535 [2024-07-15 21:39:59.296185] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.296192] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.296196] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.296201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.535 [2024-07-15 21:39:59.296207] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.535 [2024-07-15 21:39:59.296210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.535 [2024-07-15 21:39:59.296214] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e740) on tqpair=0x228aec0 00:24:09.856 [2024-07-15 21:39:59.336137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.856 [2024-07-15 21:39:59.336153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.856 [2024-07-15 21:39:59.336157] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.856 [2024-07-15 21:39:59.336161] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e5c0) on tqpair=0x228aec0 00:24:09.856 [2024-07-15 21:39:59.336180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.856 [2024-07-15 21:39:59.336185] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x228aec0) 00:24:09.856 [2024-07-15 21:39:59.336193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.856 [2024-07-15 21:39:59.336216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e5c0, cid 4, qid 0 00:24:09.856 [2024-07-15 21:39:59.336475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.856 [2024-07-15 21:39:59.336483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.856 [2024-07-15 21:39:59.336486] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.856 [2024-07-15 21:39:59.336490] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x228aec0): datao=0, datal=3072, cccid=4 00:24:09.856 [2024-07-15 21:39:59.336494] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230e5c0) on tqpair(0x228aec0): expected_datao=0, payload_size=3072 00:24:09.856 [2024-07-15 21:39:59.336498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.856 [2024-07-15 21:39:59.336549] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.856 [2024-07-15 21:39:59.336553] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.856 [2024-07-15 21:39:59.378310] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.856 [2024-07-15 21:39:59.378320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.856 [2024-07-15 21:39:59.378324] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.856 [2024-07-15 21:39:59.378328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e5c0) on tqpair=0x228aec0 00:24:09.856 [2024-07-15 21:39:59.378338] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.856 [2024-07-15 21:39:59.378341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x228aec0) 00:24:09.856 [2024-07-15 21:39:59.378349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.856 [2024-07-15 21:39:59.378363] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e5c0, cid 4, qid 0 00:24:09.856 [2024-07-15 21:39:59.378581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.856 [2024-07-15 21:39:59.378588] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.856 [2024-07-15 21:39:59.378591] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.856 [2024-07-15 21:39:59.378595] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x228aec0): datao=0, datal=8, cccid=4 00:24:09.857 [2024-07-15 21:39:59.378599] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230e5c0) on tqpair(0x228aec0): expected_datao=0, payload_size=8 00:24:09.857 [2024-07-15 21:39:59.378604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.857 [2024-07-15 21:39:59.378610] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.857 [2024-07-15 21:39:59.378614] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.857 [2024-07-15 21:39:59.424131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.857 [2024-07-15 21:39:59.424141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.857 [2024-07-15 21:39:59.424144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.857 [2024-07-15 21:39:59.424148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e5c0) on tqpair=0x228aec0 00:24:09.857 ===================================================== 00:24:09.857 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:09.857 ===================================================== 00:24:09.857 Controller Capabilities/Features 00:24:09.857 ================================ 00:24:09.857 Vendor ID: 0000 00:24:09.857 Subsystem Vendor ID: 0000 00:24:09.857 Serial Number: .................... 00:24:09.857 Model Number: ........................................ 
00:24:09.857 Firmware Version: 24.09 00:24:09.857 Recommended Arb Burst: 0 00:24:09.857 IEEE OUI Identifier: 00 00 00 00:24:09.857 Multi-path I/O 00:24:09.857 May have multiple subsystem ports: No 00:24:09.857 May have multiple controllers: No 00:24:09.857 Associated with SR-IOV VF: No 00:24:09.857 Max Data Transfer Size: 131072 00:24:09.857 Max Number of Namespaces: 0 00:24:09.857 Max Number of I/O Queues: 1024 00:24:09.857 NVMe Specification Version (VS): 1.3 00:24:09.857 NVMe Specification Version (Identify): 1.3 00:24:09.857 Maximum Queue Entries: 128 00:24:09.857 Contiguous Queues Required: Yes 00:24:09.857 Arbitration Mechanisms Supported 00:24:09.857 Weighted Round Robin: Not Supported 00:24:09.857 Vendor Specific: Not Supported 00:24:09.857 Reset Timeout: 15000 ms 00:24:09.857 Doorbell Stride: 4 bytes 00:24:09.857 NVM Subsystem Reset: Not Supported 00:24:09.857 Command Sets Supported 00:24:09.857 NVM Command Set: Supported 00:24:09.857 Boot Partition: Not Supported 00:24:09.857 Memory Page Size Minimum: 4096 bytes 00:24:09.857 Memory Page Size Maximum: 4096 bytes 00:24:09.857 Persistent Memory Region: Not Supported 00:24:09.857 Optional Asynchronous Events Supported 00:24:09.857 Namespace Attribute Notices: Not Supported 00:24:09.857 Firmware Activation Notices: Not Supported 00:24:09.857 ANA Change Notices: Not Supported 00:24:09.857 PLE Aggregate Log Change Notices: Not Supported 00:24:09.857 LBA Status Info Alert Notices: Not Supported 00:24:09.857 EGE Aggregate Log Change Notices: Not Supported 00:24:09.857 Normal NVM Subsystem Shutdown event: Not Supported 00:24:09.857 Zone Descriptor Change Notices: Not Supported 00:24:09.857 Discovery Log Change Notices: Supported 00:24:09.857 Controller Attributes 00:24:09.857 128-bit Host Identifier: Not Supported 00:24:09.857 Non-Operational Permissive Mode: Not Supported 00:24:09.857 NVM Sets: Not Supported 00:24:09.857 Read Recovery Levels: Not Supported 00:24:09.857 Endurance Groups: Not Supported 00:24:09.857 Predictable Latency Mode: Not Supported 00:24:09.857 Traffic Based Keep ALive: Not Supported 00:24:09.857 Namespace Granularity: Not Supported 00:24:09.857 SQ Associations: Not Supported 00:24:09.857 UUID List: Not Supported 00:24:09.857 Multi-Domain Subsystem: Not Supported 00:24:09.857 Fixed Capacity Management: Not Supported 00:24:09.857 Variable Capacity Management: Not Supported 00:24:09.857 Delete Endurance Group: Not Supported 00:24:09.857 Delete NVM Set: Not Supported 00:24:09.857 Extended LBA Formats Supported: Not Supported 00:24:09.857 Flexible Data Placement Supported: Not Supported 00:24:09.857 00:24:09.857 Controller Memory Buffer Support 00:24:09.857 ================================ 00:24:09.857 Supported: No 00:24:09.857 00:24:09.857 Persistent Memory Region Support 00:24:09.857 ================================ 00:24:09.857 Supported: No 00:24:09.857 00:24:09.857 Admin Command Set Attributes 00:24:09.857 ============================ 00:24:09.857 Security Send/Receive: Not Supported 00:24:09.857 Format NVM: Not Supported 00:24:09.857 Firmware Activate/Download: Not Supported 00:24:09.857 Namespace Management: Not Supported 00:24:09.857 Device Self-Test: Not Supported 00:24:09.857 Directives: Not Supported 00:24:09.857 NVMe-MI: Not Supported 00:24:09.857 Virtualization Management: Not Supported 00:24:09.857 Doorbell Buffer Config: Not Supported 00:24:09.857 Get LBA Status Capability: Not Supported 00:24:09.857 Command & Feature Lockdown Capability: Not Supported 00:24:09.857 Abort Command Limit: 1 00:24:09.857 Async 
Event Request Limit: 4 00:24:09.857 Number of Firmware Slots: N/A 00:24:09.857 Firmware Slot 1 Read-Only: N/A 00:24:09.857 Firmware Activation Without Reset: N/A 00:24:09.857 Multiple Update Detection Support: N/A 00:24:09.857 Firmware Update Granularity: No Information Provided 00:24:09.857 Per-Namespace SMART Log: No 00:24:09.857 Asymmetric Namespace Access Log Page: Not Supported 00:24:09.857 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:09.857 Command Effects Log Page: Not Supported 00:24:09.857 Get Log Page Extended Data: Supported 00:24:09.857 Telemetry Log Pages: Not Supported 00:24:09.857 Persistent Event Log Pages: Not Supported 00:24:09.857 Supported Log Pages Log Page: May Support 00:24:09.857 Commands Supported & Effects Log Page: Not Supported 00:24:09.857 Feature Identifiers & Effects Log Page:May Support 00:24:09.857 NVMe-MI Commands & Effects Log Page: May Support 00:24:09.857 Data Area 4 for Telemetry Log: Not Supported 00:24:09.857 Error Log Page Entries Supported: 128 00:24:09.857 Keep Alive: Not Supported 00:24:09.857 00:24:09.857 NVM Command Set Attributes 00:24:09.857 ========================== 00:24:09.857 Submission Queue Entry Size 00:24:09.857 Max: 1 00:24:09.857 Min: 1 00:24:09.857 Completion Queue Entry Size 00:24:09.857 Max: 1 00:24:09.857 Min: 1 00:24:09.857 Number of Namespaces: 0 00:24:09.857 Compare Command: Not Supported 00:24:09.857 Write Uncorrectable Command: Not Supported 00:24:09.857 Dataset Management Command: Not Supported 00:24:09.857 Write Zeroes Command: Not Supported 00:24:09.857 Set Features Save Field: Not Supported 00:24:09.857 Reservations: Not Supported 00:24:09.857 Timestamp: Not Supported 00:24:09.857 Copy: Not Supported 00:24:09.857 Volatile Write Cache: Not Present 00:24:09.857 Atomic Write Unit (Normal): 1 00:24:09.857 Atomic Write Unit (PFail): 1 00:24:09.857 Atomic Compare & Write Unit: 1 00:24:09.857 Fused Compare & Write: Supported 00:24:09.857 Scatter-Gather List 00:24:09.857 SGL Command Set: Supported 00:24:09.857 SGL Keyed: Supported 00:24:09.857 SGL Bit Bucket Descriptor: Not Supported 00:24:09.857 SGL Metadata Pointer: Not Supported 00:24:09.857 Oversized SGL: Not Supported 00:24:09.857 SGL Metadata Address: Not Supported 00:24:09.858 SGL Offset: Supported 00:24:09.858 Transport SGL Data Block: Not Supported 00:24:09.858 Replay Protected Memory Block: Not Supported 00:24:09.858 00:24:09.858 Firmware Slot Information 00:24:09.858 ========================= 00:24:09.858 Active slot: 0 00:24:09.858 00:24:09.858 00:24:09.858 Error Log 00:24:09.858 ========= 00:24:09.858 00:24:09.858 Active Namespaces 00:24:09.858 ================= 00:24:09.858 Discovery Log Page 00:24:09.858 ================== 00:24:09.858 Generation Counter: 2 00:24:09.858 Number of Records: 2 00:24:09.858 Record Format: 0 00:24:09.858 00:24:09.858 Discovery Log Entry 0 00:24:09.858 ---------------------- 00:24:09.858 Transport Type: 3 (TCP) 00:24:09.858 Address Family: 1 (IPv4) 00:24:09.858 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:09.858 Entry Flags: 00:24:09.858 Duplicate Returned Information: 1 00:24:09.858 Explicit Persistent Connection Support for Discovery: 1 00:24:09.858 Transport Requirements: 00:24:09.858 Secure Channel: Not Required 00:24:09.858 Port ID: 0 (0x0000) 00:24:09.858 Controller ID: 65535 (0xffff) 00:24:09.858 Admin Max SQ Size: 128 00:24:09.858 Transport Service Identifier: 4420 00:24:09.858 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:09.858 Transport Address: 10.0.0.2 00:24:09.858 
Discovery Log Entry 1 00:24:09.858 ---------------------- 00:24:09.858 Transport Type: 3 (TCP) 00:24:09.858 Address Family: 1 (IPv4) 00:24:09.858 Subsystem Type: 2 (NVM Subsystem) 00:24:09.858 Entry Flags: 00:24:09.858 Duplicate Returned Information: 0 00:24:09.858 Explicit Persistent Connection Support for Discovery: 0 00:24:09.858 Transport Requirements: 00:24:09.858 Secure Channel: Not Required 00:24:09.858 Port ID: 0 (0x0000) 00:24:09.858 Controller ID: 65535 (0xffff) 00:24:09.858 Admin Max SQ Size: 128 00:24:09.858 Transport Service Identifier: 4420 00:24:09.858 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:09.858 Transport Address: 10.0.0.2 [2024-07-15 21:39:59.424240] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:09.858 [2024-07-15 21:39:59.424251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230dfc0) on tqpair=0x228aec0 00:24:09.858 [2024-07-15 21:39:59.424258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.858 [2024-07-15 21:39:59.424263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e140) on tqpair=0x228aec0 00:24:09.858 [2024-07-15 21:39:59.424267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.858 [2024-07-15 21:39:59.424272] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e2c0) on tqpair=0x228aec0 00:24:09.858 [2024-07-15 21:39:59.424278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.858 [2024-07-15 21:39:59.424283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.858 [2024-07-15 21:39:59.424288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.858 [2024-07-15 21:39:59.424299] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.858 [2024-07-15 21:39:59.424303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.858 [2024-07-15 21:39:59.424306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.858 [2024-07-15 21:39:59.424314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.858 [2024-07-15 21:39:59.424328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.858 [2024-07-15 21:39:59.424562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.858 [2024-07-15 21:39:59.424569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.858 [2024-07-15 21:39:59.424572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.858 [2024-07-15 21:39:59.424576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.858 [2024-07-15 21:39:59.424583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.858 [2024-07-15 21:39:59.424587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.858 [2024-07-15 21:39:59.424590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.858 [2024-07-15 
21:39:59.424597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.858 [2024-07-15 21:39:59.424611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.858 [2024-07-15 21:39:59.424836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.858 [2024-07-15 21:39:59.424842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.858 [2024-07-15 21:39:59.424846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.858 [2024-07-15 21:39:59.424850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.858 [2024-07-15 21:39:59.424854] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:09.858 [2024-07-15 21:39:59.424859] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:09.858 [2024-07-15 21:39:59.424868] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.858 [2024-07-15 21:39:59.424872] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.858 [2024-07-15 21:39:59.424876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.858 [2024-07-15 21:39:59.424882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.858 [2024-07-15 21:39:59.424892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.858 [2024-07-15 21:39:59.425130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.858 [2024-07-15 21:39:59.425137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.858 [2024-07-15 21:39:59.425140] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.858 [2024-07-15 21:39:59.425144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.858 [2024-07-15 21:39:59.425154] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.425158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.425161] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.859 [2024-07-15 21:39:59.425171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.859 [2024-07-15 21:39:59.425181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.859 [2024-07-15 21:39:59.425397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.859 [2024-07-15 21:39:59.425403] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.859 [2024-07-15 21:39:59.425407] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.425410] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.859 [2024-07-15 21:39:59.425420] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.425424] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.425428] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.859 [2024-07-15 21:39:59.425434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.859 [2024-07-15 21:39:59.425444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.859 [2024-07-15 21:39:59.425675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.859 [2024-07-15 21:39:59.425682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.859 [2024-07-15 21:39:59.425685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.425689] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.859 [2024-07-15 21:39:59.425699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.425703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.425706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.859 [2024-07-15 21:39:59.425713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.859 [2024-07-15 21:39:59.425722] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.859 [2024-07-15 21:39:59.425919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.859 [2024-07-15 21:39:59.425925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.859 [2024-07-15 21:39:59.425929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.425933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.859 [2024-07-15 21:39:59.425942] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.425946] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.425949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.859 [2024-07-15 21:39:59.425956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.859 [2024-07-15 21:39:59.425965] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.859 [2024-07-15 21:39:59.426195] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.859 [2024-07-15 21:39:59.426202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.859 [2024-07-15 21:39:59.426205] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.426209] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.859 [2024-07-15 21:39:59.426219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.426223] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.426226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.859 [2024-07-15 21:39:59.426233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.859 [2024-07-15 21:39:59.426245] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.859 [2024-07-15 21:39:59.426465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.859 [2024-07-15 21:39:59.426471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.859 [2024-07-15 21:39:59.426474] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.426478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.859 [2024-07-15 21:39:59.426488] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.426491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.426495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.859 [2024-07-15 21:39:59.426501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.859 [2024-07-15 21:39:59.426511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.859 [2024-07-15 21:39:59.426714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.859 [2024-07-15 21:39:59.426721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.859 [2024-07-15 21:39:59.426724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.426728] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.859 [2024-07-15 21:39:59.426738] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.426742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.426745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.859 [2024-07-15 21:39:59.426752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.859 [2024-07-15 21:39:59.426761] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.859 [2024-07-15 21:39:59.426950] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.859 [2024-07-15 21:39:59.426956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.859 [2024-07-15 21:39:59.426959] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.426963] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.859 [2024-07-15 21:39:59.426973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.426977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.426980] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.859 [2024-07-15 21:39:59.426986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.859 [2024-07-15 21:39:59.426996] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.859 
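For reference, the Discovery Log Page printed above (Generation Counter 2, two records: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1 on TCP/IPv4 10.0.0.2:4420) is what a host retrieves with GET LOG PAGE for log identifier 0x70. A minimal sketch of fetching and walking that log through the public SPDK API is shown below; the helper names, the nsid of 0, and the assumption of a single pre-sized buffer are illustrative, not part of the test code.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

/* Completion callback: walk the records once the log page payload has arrived. */
static void
discovery_log_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
        struct spdk_nvmf_discovery_log_page *log = cb_arg;

        if (!spdk_nvme_cpl_is_error(cpl)) {
                for (uint64_t i = 0; i < log->numrec; i++) {
                        struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];
                        printf("record %" PRIu64 ": subtype %u trsvcid %.32s traddr %.256s subnqn %.256s\n",
                               i, e->subtype, e->trsvcid, e->traddr, e->subnqn);
                }
        }
        g_log_done = true;
}

/* Assumes ctrlr is already connected to nqn.2014-08.org.nvmexpress.discovery and
 * log points to a buffer large enough for the expected number of records. */
static int
fetch_discovery_log(struct spdk_nvme_ctrlr *ctrlr,
                    struct spdk_nvmf_discovery_log_page *log, uint32_t log_size)
{
        int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
                                                  log, log_size, 0, discovery_log_cb, log);
        if (rc != 0) {
                return rc;
        }
        while (!g_log_done) {
                spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
}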
[2024-07-15 21:39:59.427187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.859 [2024-07-15 21:39:59.427194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.859 [2024-07-15 21:39:59.427197] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.859 [2024-07-15 21:39:59.427201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.859 [2024-07-15 21:39:59.427211] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.427215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.427218] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.860 [2024-07-15 21:39:59.427225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-07-15 21:39:59.427235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.860 [2024-07-15 21:39:59.427449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.860 [2024-07-15 21:39:59.427456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.860 [2024-07-15 21:39:59.427459] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.427463] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.860 [2024-07-15 21:39:59.427473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.427477] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.427480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.860 [2024-07-15 21:39:59.427487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-07-15 21:39:59.427497] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.860 [2024-07-15 21:39:59.427743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.860 [2024-07-15 21:39:59.427750] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.860 [2024-07-15 21:39:59.427753] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.427757] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.860 [2024-07-15 21:39:59.427766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.427770] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.427774] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.860 [2024-07-15 21:39:59.427780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-07-15 21:39:59.427790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.860 [2024-07-15 21:39:59.428014] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.860 [2024-07-15 21:39:59.428021] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
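The long run of FABRIC PROPERTY GET / pdu type = 5 records around this point is the initiator polling the discovery controller's CSTS property while it shuts down (RTD3E = 0, shutdown timeout 10000 ms, as logged earlier); the poll ends just below with "shutdown complete in 7 milliseconds". Conceptually each of those round trips amounts to the check sketched here, written against the public cached-register accessor rather than the driver's internal async property read.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Returns true once CSTS.SHST reports "shutdown processing complete". */
static bool
shutdown_complete(struct spdk_nvme_ctrlr *ctrlr)
{
        union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

        return csts.bits.shst == SPDK_NVME_SHST_COMPLETE;
}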
00:24:09.860 [2024-07-15 21:39:59.428024] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.428028] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.860 [2024-07-15 21:39:59.428038] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.428042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.428045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x228aec0) 00:24:09.860 [2024-07-15 21:39:59.428052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-07-15 21:39:59.428061] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230e440, cid 3, qid 0 00:24:09.860 [2024-07-15 21:39:59.432150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.860 [2024-07-15 21:39:59.432159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.860 [2024-07-15 21:39:59.432163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.432167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230e440) on tqpair=0x228aec0 00:24:09.860 [2024-07-15 21:39:59.432175] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:24:09.860 00:24:09.860 21:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:09.860 [2024-07-15 21:39:59.472800] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
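With the discovery controller torn down, the test harness invokes spdk_nvme_identify directly against the NVM subsystem reported in discovery record 1 (nqn.2016-06.io.spdk:cnode1). A minimal synchronous sketch of that connect-and-identify flow through the public API follows; the program name, option handling, and printed fields are illustrative, not the tool's actual implementation.

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* illustrative app name */
        if (spdk_env_init(&env_opts) != 0) {
                return 1;
        }

        /* Same -r target string the test passes on the command line above. */
        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                return 1;
        }

        /* Drives the CONNECT / read vs / read cap / enable states traced below. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
                return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("SN: %.20s MN: %.40s\n",
               (const char *)cdata->sn, (const char *)cdata->mn);

        spdk_nvme_detach(ctrlr);
        return 0;
}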
00:24:09.860 [2024-07-15 21:39:59.472869] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274400 ] 00:24:09.860 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.860 [2024-07-15 21:39:59.508676] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:09.860 [2024-07-15 21:39:59.508724] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:09.860 [2024-07-15 21:39:59.508729] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:09.860 [2024-07-15 21:39:59.508740] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:09.860 [2024-07-15 21:39:59.508745] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:09.860 [2024-07-15 21:39:59.509240] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:09.860 [2024-07-15 21:39:59.509263] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e0aec0 0 00:24:09.860 [2024-07-15 21:39:59.524132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:09.860 [2024-07-15 21:39:59.524145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:09.860 [2024-07-15 21:39:59.524149] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:09.860 [2024-07-15 21:39:59.524153] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:09.860 [2024-07-15 21:39:59.524187] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.524192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.524196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e0aec0) 00:24:09.860 [2024-07-15 21:39:59.524208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:09.860 [2024-07-15 21:39:59.524225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dfc0, cid 0, qid 0 00:24:09.860 [2024-07-15 21:39:59.532133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.860 [2024-07-15 21:39:59.532142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.860 [2024-07-15 21:39:59.532145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.532150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dfc0) on tqpair=0x1e0aec0 00:24:09.860 [2024-07-15 21:39:59.532161] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:09.860 [2024-07-15 21:39:59.532167] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:09.860 [2024-07-15 21:39:59.532172] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:09.860 [2024-07-15 21:39:59.532183] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.860 [2024-07-15 21:39:59.532188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
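The records from 21:39:59.508 onward trace the initialization state machine for cnode1: connect adminq, the icreq exchange, FABRIC CONNECT (CNTLID 0x0001), then the "read vs" and "read cap" states, which are fabrics property reads of the VS and CAP registers. Once the controller is ready those values are exposed through the cached-register accessors, as in the sketch below (a sketch only, assuming ctrlr is the controller connected in the previous example).

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void
print_vs_cap(struct spdk_nvme_ctrlr *ctrlr)
{
        union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
        union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

        /* Matches "NVMe Specification Version (VS): 1.3" in the identify output below. */
        printf("VS: %u.%u\n", (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr);
        /* MQES is zero-based, so 127 corresponds to "Maximum Queue Entries: 128";
         * TO is in 500 ms units, so 30 corresponds to "Reset Timeout: 15000 ms". */
        printf("CAP.MQES: %u CAP.TO: %u\n", (unsigned)cap.bits.mqes, (unsigned)cap.bits.to);
}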
00:24:09.860 [2024-07-15 21:39:59.532191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e0aec0) 00:24:09.860 [2024-07-15 21:39:59.532198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.860 [2024-07-15 21:39:59.532211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dfc0, cid 0, qid 0 00:24:09.860 [2024-07-15 21:39:59.532420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.860 [2024-07-15 21:39:59.532427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.861 [2024-07-15 21:39:59.532430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.532434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dfc0) on tqpair=0x1e0aec0 00:24:09.861 [2024-07-15 21:39:59.532442] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:09.861 [2024-07-15 21:39:59.532450] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:09.861 [2024-07-15 21:39:59.532457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.532460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.532464] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e0aec0) 00:24:09.861 [2024-07-15 21:39:59.532471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-07-15 21:39:59.532482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dfc0, cid 0, qid 0 00:24:09.861 [2024-07-15 21:39:59.532673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.861 [2024-07-15 21:39:59.532679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.861 [2024-07-15 21:39:59.532683] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.532686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dfc0) on tqpair=0x1e0aec0 00:24:09.861 [2024-07-15 21:39:59.532691] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:09.861 [2024-07-15 21:39:59.532699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:09.861 [2024-07-15 21:39:59.532705] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.532709] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.532712] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e0aec0) 00:24:09.861 [2024-07-15 21:39:59.532719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-07-15 21:39:59.532729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dfc0, cid 0, qid 0 00:24:09.861 [2024-07-15 21:39:59.532997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.861 [2024-07-15 21:39:59.533003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:24:09.861 [2024-07-15 21:39:59.533006] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.533010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dfc0) on tqpair=0x1e0aec0 00:24:09.861 [2024-07-15 21:39:59.533015] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:09.861 [2024-07-15 21:39:59.533024] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.533028] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.533031] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e0aec0) 00:24:09.861 [2024-07-15 21:39:59.533038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-07-15 21:39:59.533048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dfc0, cid 0, qid 0 00:24:09.861 [2024-07-15 21:39:59.533274] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.861 [2024-07-15 21:39:59.533280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.861 [2024-07-15 21:39:59.533284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.533287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dfc0) on tqpair=0x1e0aec0 00:24:09.861 [2024-07-15 21:39:59.533292] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:09.861 [2024-07-15 21:39:59.533296] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:09.861 [2024-07-15 21:39:59.533306] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:09.861 [2024-07-15 21:39:59.533411] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:09.861 [2024-07-15 21:39:59.533415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:09.861 [2024-07-15 21:39:59.533422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.533426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.533430] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e0aec0) 00:24:09.861 [2024-07-15 21:39:59.533436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-07-15 21:39:59.533447] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dfc0, cid 0, qid 0 00:24:09.861 [2024-07-15 21:39:59.533643] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.861 [2024-07-15 21:39:59.533649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.861 [2024-07-15 21:39:59.533652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.533656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dfc0) on 
tqpair=0x1e0aec0 00:24:09.861 [2024-07-15 21:39:59.533661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:09.861 [2024-07-15 21:39:59.533670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.533674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.533677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e0aec0) 00:24:09.861 [2024-07-15 21:39:59.533684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-07-15 21:39:59.533693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dfc0, cid 0, qid 0 00:24:09.861 [2024-07-15 21:39:59.533882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.861 [2024-07-15 21:39:59.533888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.861 [2024-07-15 21:39:59.533891] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.533895] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dfc0) on tqpair=0x1e0aec0 00:24:09.861 [2024-07-15 21:39:59.533899] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:09.861 [2024-07-15 21:39:59.533904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:09.861 [2024-07-15 21:39:59.533911] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:09.861 [2024-07-15 21:39:59.533919] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:09.861 [2024-07-15 21:39:59.533927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.861 [2024-07-15 21:39:59.533931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e0aec0) 00:24:09.861 [2024-07-15 21:39:59.533938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.861 [2024-07-15 21:39:59.533948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dfc0, cid 0, qid 0 00:24:09.861 [2024-07-15 21:39:59.534204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.861 [2024-07-15 21:39:59.534211] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.861 [2024-07-15 21:39:59.534217] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.534221] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e0aec0): datao=0, datal=4096, cccid=0 00:24:09.862 [2024-07-15 21:39:59.534225] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8dfc0) on tqpair(0x1e0aec0): expected_datao=0, payload_size=4096 00:24:09.862 [2024-07-15 21:39:59.534230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.534277] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.534282] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575293] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.862 [2024-07-15 21:39:59.575303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.862 [2024-07-15 21:39:59.575307] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dfc0) on tqpair=0x1e0aec0 00:24:09.862 [2024-07-15 21:39:59.575318] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:09.862 [2024-07-15 21:39:59.575327] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:09.862 [2024-07-15 21:39:59.575332] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:09.862 [2024-07-15 21:39:59.575336] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:09.862 [2024-07-15 21:39:59.575340] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:09.862 [2024-07-15 21:39:59.575345] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:09.862 [2024-07-15 21:39:59.575353] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:09.862 [2024-07-15 21:39:59.575360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575367] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e0aec0) 00:24:09.862 [2024-07-15 21:39:59.575375] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:09.862 [2024-07-15 21:39:59.575387] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dfc0, cid 0, qid 0 00:24:09.862 [2024-07-15 21:39:59.575544] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.862 [2024-07-15 21:39:59.575550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.862 [2024-07-15 21:39:59.575554] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dfc0) on tqpair=0x1e0aec0 00:24:09.862 [2024-07-15 21:39:59.575564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575568] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e0aec0) 00:24:09.862 [2024-07-15 21:39:59.575577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.862 [2024-07-15 21:39:59.575583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575590] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e0aec0) 00:24:09.862 [2024-07-15 21:39:59.575596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.862 [2024-07-15 21:39:59.575605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e0aec0) 00:24:09.862 [2024-07-15 21:39:59.575618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.862 [2024-07-15 21:39:59.575624] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575627] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575631] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e0aec0) 00:24:09.862 [2024-07-15 21:39:59.575636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.862 [2024-07-15 21:39:59.575641] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:09.862 [2024-07-15 21:39:59.575650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:09.862 [2024-07-15 21:39:59.575657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e0aec0) 00:24:09.862 [2024-07-15 21:39:59.575667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.862 [2024-07-15 21:39:59.575679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8dfc0, cid 0, qid 0 00:24:09.862 [2024-07-15 21:39:59.575684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e140, cid 1, qid 0 00:24:09.862 [2024-07-15 21:39:59.575689] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e2c0, cid 2, qid 0 00:24:09.862 [2024-07-15 21:39:59.575693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e440, cid 3, qid 0 00:24:09.862 [2024-07-15 21:39:59.575698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e5c0, cid 4, qid 0 00:24:09.862 [2024-07-15 21:39:59.575909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.862 [2024-07-15 21:39:59.575915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.862 [2024-07-15 21:39:59.575918] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e5c0) on tqpair=0x1e0aec0 00:24:09.862 [2024-07-15 21:39:59.575927] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:09.862 [2024-07-15 21:39:59.575932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:24:09.862 [2024-07-15 21:39:59.575940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:09.862 [2024-07-15 21:39:59.575946] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:09.862 [2024-07-15 21:39:59.575952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575955] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.575959] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e0aec0) 00:24:09.862 [2024-07-15 21:39:59.575965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:09.862 [2024-07-15 21:39:59.575975] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e5c0, cid 4, qid 0 00:24:09.862 [2024-07-15 21:39:59.580128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.862 [2024-07-15 21:39:59.580136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.862 [2024-07-15 21:39:59.580142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.862 [2024-07-15 21:39:59.580146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e5c0) on tqpair=0x1e0aec0 00:24:09.863 [2024-07-15 21:39:59.580211] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:09.863 [2024-07-15 21:39:59.580221] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:09.863 [2024-07-15 21:39:59.580228] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.863 [2024-07-15 21:39:59.580232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e0aec0) 00:24:09.863 [2024-07-15 21:39:59.580238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.863 [2024-07-15 21:39:59.580251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e5c0, cid 4, qid 0 00:24:09.863 [2024-07-15 21:39:59.580452] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.863 [2024-07-15 21:39:59.580459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.863 [2024-07-15 21:39:59.580462] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.863 [2024-07-15 21:39:59.580466] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e0aec0): datao=0, datal=4096, cccid=4 00:24:09.863 [2024-07-15 21:39:59.580470] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8e5c0) on tqpair(0x1e0aec0): expected_datao=0, payload_size=4096 00:24:09.863 [2024-07-15 21:39:59.580474] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.863 [2024-07-15 21:39:59.580481] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.863 [2024-07-15 21:39:59.580485] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.863 [2024-07-15 21:39:59.580679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:24:09.863 [2024-07-15 21:39:59.580685] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.863 [2024-07-15 21:39:59.580689] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.863 [2024-07-15 21:39:59.580692] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e5c0) on tqpair=0x1e0aec0 00:24:09.863 [2024-07-15 21:39:59.580700] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:09.863 [2024-07-15 21:39:59.580710] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:09.863 [2024-07-15 21:39:59.580718] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:09.863 [2024-07-15 21:39:59.580725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.863 [2024-07-15 21:39:59.580729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e0aec0) 00:24:09.863 [2024-07-15 21:39:59.580735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.863 [2024-07-15 21:39:59.580746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e5c0, cid 4, qid 0 00:24:09.863 [2024-07-15 21:39:59.580986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.863 [2024-07-15 21:39:59.580992] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.863 [2024-07-15 21:39:59.580995] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.863 [2024-07-15 21:39:59.580999] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e0aec0): datao=0, datal=4096, cccid=4 00:24:09.863 [2024-07-15 21:39:59.581003] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8e5c0) on tqpair(0x1e0aec0): expected_datao=0, payload_size=4096 00:24:09.863 [2024-07-15 21:39:59.581007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.863 [2024-07-15 21:39:59.581014] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.863 [2024-07-15 21:39:59.581020] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.863 [2024-07-15 21:39:59.581263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.863 [2024-07-15 21:39:59.581270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.863 [2024-07-15 21:39:59.581273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.863 [2024-07-15 21:39:59.581277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e5c0) on tqpair=0x1e0aec0 00:24:09.863 [2024-07-15 21:39:59.581288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:09.863 [2024-07-15 21:39:59.581298] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:09.863 [2024-07-15 21:39:59.581305] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.863 [2024-07-15 21:39:59.581309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e0aec0) 00:24:09.863 [2024-07-15 21:39:59.581315] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.863 [2024-07-15 21:39:59.581326] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e5c0, cid 4, qid 0 00:24:09.863 [2024-07-15 21:39:59.581553] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.863 [2024-07-15 21:39:59.581560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.863 [2024-07-15 21:39:59.581563] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.863 [2024-07-15 21:39:59.581566] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e0aec0): datao=0, datal=4096, cccid=4 00:24:09.863 [2024-07-15 21:39:59.581571] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8e5c0) on tqpair(0x1e0aec0): expected_datao=0, payload_size=4096 00:24:09.863 [2024-07-15 21:39:59.581575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.581581] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.581585] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.581781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.864 [2024-07-15 21:39:59.581788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.864 [2024-07-15 21:39:59.581791] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.581795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e5c0) on tqpair=0x1e0aec0 00:24:09.864 [2024-07-15 21:39:59.581801] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:09.864 [2024-07-15 21:39:59.581809] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:09.864 [2024-07-15 21:39:59.581819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:09.864 [2024-07-15 21:39:59.581825] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:09.864 [2024-07-15 21:39:59.581830] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:09.864 [2024-07-15 21:39:59.581835] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:09.864 [2024-07-15 21:39:59.581840] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:09.864 [2024-07-15 21:39:59.581844] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:09.864 [2024-07-15 21:39:59.581849] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:09.864 [2024-07-15 21:39:59.581864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.581868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1e0aec0) 00:24:09.864 [2024-07-15 21:39:59.581875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.864 [2024-07-15 21:39:59.581881] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.581885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.581888] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e0aec0) 00:24:09.864 [2024-07-15 21:39:59.581894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.864 [2024-07-15 21:39:59.581908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e5c0, cid 4, qid 0 00:24:09.864 [2024-07-15 21:39:59.581913] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e740, cid 5, qid 0 00:24:09.864 [2024-07-15 21:39:59.582104] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.864 [2024-07-15 21:39:59.582110] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.864 [2024-07-15 21:39:59.582114] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.582117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e5c0) on tqpair=0x1e0aec0 00:24:09.864 [2024-07-15 21:39:59.582129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.864 [2024-07-15 21:39:59.582135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.864 [2024-07-15 21:39:59.582138] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.582142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e740) on tqpair=0x1e0aec0 00:24:09.864 [2024-07-15 21:39:59.582150] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.582154] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e0aec0) 00:24:09.864 [2024-07-15 21:39:59.582160] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.864 [2024-07-15 21:39:59.582171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e740, cid 5, qid 0 00:24:09.864 [2024-07-15 21:39:59.582348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.864 [2024-07-15 21:39:59.582355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.864 [2024-07-15 21:39:59.582358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.582362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e740) on tqpair=0x1e0aec0 00:24:09.864 [2024-07-15 21:39:59.582370] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.582374] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e0aec0) 00:24:09.864 [2024-07-15 21:39:59.582380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.864 [2024-07-15 21:39:59.582390] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e740, cid 5, qid 0 00:24:09.864 [2024-07-15 21:39:59.582585] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.864 [2024-07-15 21:39:59.582591] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.864 [2024-07-15 21:39:59.582594] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.582598] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e740) on tqpair=0x1e0aec0 00:24:09.864 [2024-07-15 21:39:59.582607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.582610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e0aec0) 00:24:09.864 [2024-07-15 21:39:59.582619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.864 [2024-07-15 21:39:59.582629] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e740, cid 5, qid 0 00:24:09.864 [2024-07-15 21:39:59.582819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.864 [2024-07-15 21:39:59.582825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.864 [2024-07-15 21:39:59.582829] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.582833] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e740) on tqpair=0x1e0aec0 00:24:09.864 [2024-07-15 21:39:59.582846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.582850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e0aec0) 00:24:09.864 [2024-07-15 21:39:59.582857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.864 [2024-07-15 21:39:59.582864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.582867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e0aec0) 00:24:09.864 [2024-07-15 21:39:59.582874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.864 [2024-07-15 21:39:59.582881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.582884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1e0aec0) 00:24:09.864 [2024-07-15 21:39:59.582890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.864 [2024-07-15 21:39:59.582897] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.864 [2024-07-15 21:39:59.582901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e0aec0) 00:24:09.864 [2024-07-15 21:39:59.582907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.864 [2024-07-15 21:39:59.582918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e740, cid 5, qid 0 00:24:09.864 [2024-07-15 21:39:59.582923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e5c0, cid 4, qid 0 
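The four GET LOG PAGE admin commands above (cid 4 through cid 7) are the driver pre-reading standard log pages during the "set supported log pages" state. For GET LOG PAGE, cdw10 carries the log identifier in bits 7:0 and NUMDL (number of dwords minus one, low 16 bits, with NUMDU zero here) in bits 31:16, so the values decode as follows and match the c2h_data payload sizes in the records that follow:

  cid 5: cdw10:07ff0001 -> LID 0x01 (Error Information),             (0x07ff + 1) * 4 = 8192 bytes
  cid 4: cdw10:007f0002 -> LID 0x02 (SMART / Health Information),    (0x007f + 1) * 4 = 512 bytes
  cid 6: cdw10:007f0003 -> LID 0x03 (Firmware Slot Information),     (0x007f + 1) * 4 = 512 bytes
  cid 7: cdw10:03ff0005 -> LID 0x05 (Commands Supported and Effects), (0x03ff + 1) * 4 = 4096 bytes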
00:24:09.864 [2024-07-15 21:39:59.582928] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e8c0, cid 6, qid 0 00:24:09.865 [2024-07-15 21:39:59.582933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ea40, cid 7, qid 0 00:24:09.865 [2024-07-15 21:39:59.583309] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.865 [2024-07-15 21:39:59.583315] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.865 [2024-07-15 21:39:59.583319] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583322] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e0aec0): datao=0, datal=8192, cccid=5 00:24:09.865 [2024-07-15 21:39:59.583327] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8e740) on tqpair(0x1e0aec0): expected_datao=0, payload_size=8192 00:24:09.865 [2024-07-15 21:39:59.583331] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583531] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583535] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.865 [2024-07-15 21:39:59.583546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.865 [2024-07-15 21:39:59.583550] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583553] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e0aec0): datao=0, datal=512, cccid=4 00:24:09.865 [2024-07-15 21:39:59.583559] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8e5c0) on tqpair(0x1e0aec0): expected_datao=0, payload_size=512 00:24:09.865 [2024-07-15 21:39:59.583564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583570] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583573] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583579] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.865 [2024-07-15 21:39:59.583584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.865 [2024-07-15 21:39:59.583588] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583591] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e0aec0): datao=0, datal=512, cccid=6 00:24:09.865 [2024-07-15 21:39:59.583595] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8e8c0) on tqpair(0x1e0aec0): expected_datao=0, payload_size=512 00:24:09.865 [2024-07-15 21:39:59.583599] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583606] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583609] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.865 [2024-07-15 21:39:59.583620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.865 [2024-07-15 21:39:59.583623] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583627] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e0aec0): datao=0, datal=4096, cccid=7 00:24:09.865 [2024-07-15 21:39:59.583631] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e8ea40) on tqpair(0x1e0aec0): expected_datao=0, payload_size=4096 00:24:09.865 [2024-07-15 21:39:59.583635] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583642] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583645] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.865 [2024-07-15 21:39:59.583767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.865 [2024-07-15 21:39:59.583771] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583774] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e740) on tqpair=0x1e0aec0 00:24:09.865 [2024-07-15 21:39:59.583786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.865 [2024-07-15 21:39:59.583792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.865 [2024-07-15 21:39:59.583795] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e5c0) on tqpair=0x1e0aec0 00:24:09.865 [2024-07-15 21:39:59.583809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.865 [2024-07-15 21:39:59.583814] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.865 [2024-07-15 21:39:59.583818] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583821] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e8c0) on tqpair=0x1e0aec0 00:24:09.865 [2024-07-15 21:39:59.583828] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.865 [2024-07-15 21:39:59.583834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.865 [2024-07-15 21:39:59.583837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.865 [2024-07-15 21:39:59.583841] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8ea40) on tqpair=0x1e0aec0 00:24:09.865 ===================================================== 00:24:09.865 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:09.865 ===================================================== 00:24:09.865 Controller Capabilities/Features 00:24:09.865 ================================ 00:24:09.865 Vendor ID: 8086 00:24:09.865 Subsystem Vendor ID: 8086 00:24:09.865 Serial Number: SPDK00000000000001 00:24:09.865 Model Number: SPDK bdev Controller 00:24:09.865 Firmware Version: 24.09 00:24:09.865 Recommended Arb Burst: 6 00:24:09.865 IEEE OUI Identifier: e4 d2 5c 00:24:09.865 Multi-path I/O 00:24:09.865 May have multiple subsystem ports: Yes 00:24:09.865 May have multiple controllers: Yes 00:24:09.865 Associated with SR-IOV VF: No 00:24:09.865 Max Data Transfer Size: 131072 00:24:09.865 Max Number of Namespaces: 32 00:24:09.865 Max Number of I/O Queues: 127 00:24:09.865 NVMe Specification Version (VS): 1.3 00:24:09.865 NVMe Specification Version (Identify): 1.3 00:24:09.865 Maximum Queue Entries: 128 00:24:09.865 Contiguous Queues Required: Yes 00:24:09.865 
Arbitration Mechanisms Supported 00:24:09.865 Weighted Round Robin: Not Supported 00:24:09.865 Vendor Specific: Not Supported 00:24:09.865 Reset Timeout: 15000 ms 00:24:09.865 Doorbell Stride: 4 bytes 00:24:09.865 NVM Subsystem Reset: Not Supported 00:24:09.865 Command Sets Supported 00:24:09.865 NVM Command Set: Supported 00:24:09.865 Boot Partition: Not Supported 00:24:09.865 Memory Page Size Minimum: 4096 bytes 00:24:09.865 Memory Page Size Maximum: 4096 bytes 00:24:09.865 Persistent Memory Region: Not Supported 00:24:09.865 Optional Asynchronous Events Supported 00:24:09.865 Namespace Attribute Notices: Supported 00:24:09.865 Firmware Activation Notices: Not Supported 00:24:09.865 ANA Change Notices: Not Supported 00:24:09.865 PLE Aggregate Log Change Notices: Not Supported 00:24:09.865 LBA Status Info Alert Notices: Not Supported 00:24:09.865 EGE Aggregate Log Change Notices: Not Supported 00:24:09.865 Normal NVM Subsystem Shutdown event: Not Supported 00:24:09.866 Zone Descriptor Change Notices: Not Supported 00:24:09.866 Discovery Log Change Notices: Not Supported 00:24:09.866 Controller Attributes 00:24:09.866 128-bit Host Identifier: Supported 00:24:09.866 Non-Operational Permissive Mode: Not Supported 00:24:09.866 NVM Sets: Not Supported 00:24:09.866 Read Recovery Levels: Not Supported 00:24:09.866 Endurance Groups: Not Supported 00:24:09.866 Predictable Latency Mode: Not Supported 00:24:09.866 Traffic Based Keep ALive: Not Supported 00:24:09.866 Namespace Granularity: Not Supported 00:24:09.866 SQ Associations: Not Supported 00:24:09.866 UUID List: Not Supported 00:24:09.866 Multi-Domain Subsystem: Not Supported 00:24:09.866 Fixed Capacity Management: Not Supported 00:24:09.866 Variable Capacity Management: Not Supported 00:24:09.866 Delete Endurance Group: Not Supported 00:24:09.866 Delete NVM Set: Not Supported 00:24:09.866 Extended LBA Formats Supported: Not Supported 00:24:09.866 Flexible Data Placement Supported: Not Supported 00:24:09.866 00:24:09.866 Controller Memory Buffer Support 00:24:09.866 ================================ 00:24:09.866 Supported: No 00:24:09.866 00:24:09.866 Persistent Memory Region Support 00:24:09.866 ================================ 00:24:09.866 Supported: No 00:24:09.866 00:24:09.866 Admin Command Set Attributes 00:24:09.866 ============================ 00:24:09.866 Security Send/Receive: Not Supported 00:24:09.866 Format NVM: Not Supported 00:24:09.866 Firmware Activate/Download: Not Supported 00:24:09.866 Namespace Management: Not Supported 00:24:09.866 Device Self-Test: Not Supported 00:24:09.866 Directives: Not Supported 00:24:09.866 NVMe-MI: Not Supported 00:24:09.866 Virtualization Management: Not Supported 00:24:09.866 Doorbell Buffer Config: Not Supported 00:24:09.866 Get LBA Status Capability: Not Supported 00:24:09.866 Command & Feature Lockdown Capability: Not Supported 00:24:09.866 Abort Command Limit: 4 00:24:09.866 Async Event Request Limit: 4 00:24:09.866 Number of Firmware Slots: N/A 00:24:09.866 Firmware Slot 1 Read-Only: N/A 00:24:09.866 Firmware Activation Without Reset: N/A 00:24:09.866 Multiple Update Detection Support: N/A 00:24:09.866 Firmware Update Granularity: No Information Provided 00:24:09.866 Per-Namespace SMART Log: No 00:24:09.866 Asymmetric Namespace Access Log Page: Not Supported 00:24:09.866 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:09.866 Command Effects Log Page: Supported 00:24:09.866 Get Log Page Extended Data: Supported 00:24:09.866 Telemetry Log Pages: Not Supported 00:24:09.866 Persistent Event Log 
Pages: Not Supported 00:24:09.866 Supported Log Pages Log Page: May Support 00:24:09.866 Commands Supported & Effects Log Page: Not Supported 00:24:09.866 Feature Identifiers & Effects Log Page:May Support 00:24:09.866 NVMe-MI Commands & Effects Log Page: May Support 00:24:09.866 Data Area 4 for Telemetry Log: Not Supported 00:24:09.866 Error Log Page Entries Supported: 128 00:24:09.866 Keep Alive: Supported 00:24:09.866 Keep Alive Granularity: 10000 ms 00:24:09.866 00:24:09.866 NVM Command Set Attributes 00:24:09.866 ========================== 00:24:09.866 Submission Queue Entry Size 00:24:09.866 Max: 64 00:24:09.866 Min: 64 00:24:09.866 Completion Queue Entry Size 00:24:09.866 Max: 16 00:24:09.866 Min: 16 00:24:09.866 Number of Namespaces: 32 00:24:09.866 Compare Command: Supported 00:24:09.866 Write Uncorrectable Command: Not Supported 00:24:09.866 Dataset Management Command: Supported 00:24:09.866 Write Zeroes Command: Supported 00:24:09.866 Set Features Save Field: Not Supported 00:24:09.866 Reservations: Supported 00:24:09.866 Timestamp: Not Supported 00:24:09.866 Copy: Supported 00:24:09.866 Volatile Write Cache: Present 00:24:09.866 Atomic Write Unit (Normal): 1 00:24:09.866 Atomic Write Unit (PFail): 1 00:24:09.866 Atomic Compare & Write Unit: 1 00:24:09.866 Fused Compare & Write: Supported 00:24:09.866 Scatter-Gather List 00:24:09.866 SGL Command Set: Supported 00:24:09.866 SGL Keyed: Supported 00:24:09.866 SGL Bit Bucket Descriptor: Not Supported 00:24:09.866 SGL Metadata Pointer: Not Supported 00:24:09.866 Oversized SGL: Not Supported 00:24:09.866 SGL Metadata Address: Not Supported 00:24:09.866 SGL Offset: Supported 00:24:09.866 Transport SGL Data Block: Not Supported 00:24:09.866 Replay Protected Memory Block: Not Supported 00:24:09.866 00:24:09.866 Firmware Slot Information 00:24:09.866 ========================= 00:24:09.866 Active slot: 1 00:24:09.866 Slot 1 Firmware Revision: 24.09 00:24:09.866 00:24:09.866 00:24:09.866 Commands Supported and Effects 00:24:09.866 ============================== 00:24:09.866 Admin Commands 00:24:09.866 -------------- 00:24:09.866 Get Log Page (02h): Supported 00:24:09.866 Identify (06h): Supported 00:24:09.866 Abort (08h): Supported 00:24:09.866 Set Features (09h): Supported 00:24:09.866 Get Features (0Ah): Supported 00:24:09.866 Asynchronous Event Request (0Ch): Supported 00:24:09.866 Keep Alive (18h): Supported 00:24:09.866 I/O Commands 00:24:09.866 ------------ 00:24:09.866 Flush (00h): Supported LBA-Change 00:24:09.866 Write (01h): Supported LBA-Change 00:24:09.866 Read (02h): Supported 00:24:09.866 Compare (05h): Supported 00:24:09.866 Write Zeroes (08h): Supported LBA-Change 00:24:09.866 Dataset Management (09h): Supported LBA-Change 00:24:09.866 Copy (19h): Supported LBA-Change 00:24:09.866 00:24:09.866 Error Log 00:24:09.866 ========= 00:24:09.866 00:24:09.866 Arbitration 00:24:09.866 =========== 00:24:09.866 Arbitration Burst: 1 00:24:09.866 00:24:09.866 Power Management 00:24:09.866 ================ 00:24:09.866 Number of Power States: 1 00:24:09.866 Current Power State: Power State #0 00:24:09.866 Power State #0: 00:24:09.866 Max Power: 0.00 W 00:24:09.866 Non-Operational State: Operational 00:24:09.866 Entry Latency: Not Reported 00:24:09.866 Exit Latency: Not Reported 00:24:09.866 Relative Read Throughput: 0 00:24:09.866 Relative Read Latency: 0 00:24:09.866 Relative Write Throughput: 0 00:24:09.866 Relative Write Latency: 0 00:24:09.866 Idle Power: Not Reported 00:24:09.866 Active Power: Not Reported 00:24:09.866 
Non-Operational Permissive Mode: Not Supported 00:24:09.866 00:24:09.866 Health Information 00:24:09.866 ================== 00:24:09.866 Critical Warnings: 00:24:09.866 Available Spare Space: OK 00:24:09.866 Temperature: OK 00:24:09.866 Device Reliability: OK 00:24:09.866 Read Only: No 00:24:09.866 Volatile Memory Backup: OK 00:24:09.866 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:09.866 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:09.866 Available Spare: 0% 00:24:09.866 Available Spare Threshold: 0% 00:24:09.866 Life Percentage Used:[2024-07-15 21:39:59.583938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.866 [2024-07-15 21:39:59.583943] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e0aec0) 00:24:09.866 [2024-07-15 21:39:59.583951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.866 [2024-07-15 21:39:59.583964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8ea40, cid 7, qid 0 00:24:09.866 [2024-07-15 21:39:59.588128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.866 [2024-07-15 21:39:59.588138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.866 [2024-07-15 21:39:59.588141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.588145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8ea40) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.588178] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:09.867 [2024-07-15 21:39:59.588187] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8dfc0) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.588193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.867 [2024-07-15 21:39:59.588198] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e140) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.588203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.867 [2024-07-15 21:39:59.588208] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e2c0) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.588212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.867 [2024-07-15 21:39:59.588217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e440) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.588221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.867 [2024-07-15 21:39:59.588229] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.588233] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.588237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e0aec0) 00:24:09.867 [2024-07-15 21:39:59.588244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.867 [2024-07-15 21:39:59.588257] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e440, cid 3, qid 0 00:24:09.867 [2024-07-15 21:39:59.588483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.867 [2024-07-15 21:39:59.588489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.867 [2024-07-15 21:39:59.588493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.588496] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e440) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.588503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.588507] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.588510] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e0aec0) 00:24:09.867 [2024-07-15 21:39:59.588517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.867 [2024-07-15 21:39:59.588530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e440, cid 3, qid 0 00:24:09.867 [2024-07-15 21:39:59.588760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.867 [2024-07-15 21:39:59.588766] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.867 [2024-07-15 21:39:59.588769] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.588773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e440) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.588778] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:09.867 [2024-07-15 21:39:59.588785] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:09.867 [2024-07-15 21:39:59.588794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.588798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.588802] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e0aec0) 00:24:09.867 [2024-07-15 21:39:59.588808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.867 [2024-07-15 21:39:59.588818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e440, cid 3, qid 0 00:24:09.867 [2024-07-15 21:39:59.589031] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.867 [2024-07-15 21:39:59.589037] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.867 [2024-07-15 21:39:59.589041] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.589044] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e440) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.589054] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.589058] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.589061] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e0aec0) 00:24:09.867 [2024-07-15 21:39:59.589068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.867 [2024-07-15 21:39:59.589077] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e440, cid 3, qid 0 00:24:09.867 [2024-07-15 21:39:59.589372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.867 [2024-07-15 21:39:59.589379] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.867 [2024-07-15 21:39:59.589382] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.589386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e440) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.589396] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.589399] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.589403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e0aec0) 00:24:09.867 [2024-07-15 21:39:59.589409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.867 [2024-07-15 21:39:59.589419] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e440, cid 3, qid 0 00:24:09.867 [2024-07-15 21:39:59.589632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.867 [2024-07-15 21:39:59.589638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.867 [2024-07-15 21:39:59.589642] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.589645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e440) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.589655] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.589658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.589662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e0aec0) 00:24:09.867 [2024-07-15 21:39:59.589668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.867 [2024-07-15 21:39:59.589678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e440, cid 3, qid 0 00:24:09.867 [2024-07-15 21:39:59.589865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.867 [2024-07-15 21:39:59.589871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.867 [2024-07-15 21:39:59.589875] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.589878] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e440) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.589890] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.589894] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.589897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e0aec0) 00:24:09.867 [2024-07-15 21:39:59.589904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.867 [2024-07-15 21:39:59.589913] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e440, cid 3, qid 0 00:24:09.867 [2024-07-15 
21:39:59.590088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.867 [2024-07-15 21:39:59.590094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.867 [2024-07-15 21:39:59.590098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.590101] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e440) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.590110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.590114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.590118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e0aec0) 00:24:09.867 [2024-07-15 21:39:59.590128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.867 [2024-07-15 21:39:59.590138] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e440, cid 3, qid 0 00:24:09.867 [2024-07-15 21:39:59.590413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.867 [2024-07-15 21:39:59.590419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.867 [2024-07-15 21:39:59.590423] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.590426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e440) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.590436] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.590440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.590443] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e0aec0) 00:24:09.867 [2024-07-15 21:39:59.590450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.867 [2024-07-15 21:39:59.590459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e440, cid 3, qid 0 00:24:09.867 [2024-07-15 21:39:59.590672] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.867 [2024-07-15 21:39:59.590678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.867 [2024-07-15 21:39:59.590682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.590685] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e440) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.590695] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.590698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.590702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e0aec0) 00:24:09.867 [2024-07-15 21:39:59.590708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.867 [2024-07-15 21:39:59.590718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e440, cid 3, qid 0 00:24:09.867 [2024-07-15 21:39:59.590988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.867 [2024-07-15 21:39:59.590994] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.867 
[2024-07-15 21:39:59.590998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.591001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e440) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.591011] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.591017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.591020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e0aec0) 00:24:09.867 [2024-07-15 21:39:59.591027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.867 [2024-07-15 21:39:59.591036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e8e440, cid 3, qid 0 00:24:09.867 [2024-07-15 21:39:59.595128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.867 [2024-07-15 21:39:59.595136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.867 [2024-07-15 21:39:59.595139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.867 [2024-07-15 21:39:59.595143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e8e440) on tqpair=0x1e0aec0 00:24:09.867 [2024-07-15 21:39:59.595151] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:09.868 0% 00:24:09.868 Data Units Read: 0 00:24:09.868 Data Units Written: 0 00:24:09.868 Host Read Commands: 0 00:24:09.868 Host Write Commands: 0 00:24:09.868 Controller Busy Time: 0 minutes 00:24:09.868 Power Cycles: 0 00:24:09.868 Power On Hours: 0 hours 00:24:09.868 Unsafe Shutdowns: 0 00:24:09.868 Unrecoverable Media Errors: 0 00:24:09.868 Lifetime Error Log Entries: 0 00:24:09.868 Warning Temperature Time: 0 minutes 00:24:09.868 Critical Temperature Time: 0 minutes 00:24:09.868 00:24:09.868 Number of Queues 00:24:09.868 ================ 00:24:09.868 Number of I/O Submission Queues: 127 00:24:09.868 Number of I/O Completion Queues: 127 00:24:09.868 00:24:09.868 Active Namespaces 00:24:09.868 ================= 00:24:09.868 Namespace ID:1 00:24:09.868 Error Recovery Timeout: Unlimited 00:24:09.868 Command Set Identifier: NVM (00h) 00:24:09.868 Deallocate: Supported 00:24:09.868 Deallocated/Unwritten Error: Not Supported 00:24:09.868 Deallocated Read Value: Unknown 00:24:09.868 Deallocate in Write Zeroes: Not Supported 00:24:09.868 Deallocated Guard Field: 0xFFFF 00:24:09.868 Flush: Supported 00:24:09.868 Reservation: Supported 00:24:09.868 Namespace Sharing Capabilities: Multiple Controllers 00:24:09.868 Size (in LBAs): 131072 (0GiB) 00:24:09.868 Capacity (in LBAs): 131072 (0GiB) 00:24:09.868 Utilization (in LBAs): 131072 (0GiB) 00:24:09.868 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:09.868 EUI64: ABCDEF0123456789 00:24:09.868 UUID: 4b9f2048-9477-4498-b014-bc83f73db2c3 00:24:09.868 Thin Provisioning: Not Supported 00:24:09.868 Per-NS Atomic Units: Yes 00:24:09.868 Atomic Boundary Size (Normal): 0 00:24:09.868 Atomic Boundary Size (PFail): 0 00:24:09.868 Atomic Boundary Offset: 0 00:24:09.868 Maximum Single Source Range Length: 65535 00:24:09.868 Maximum Copy Length: 65535 00:24:09.868 Maximum Source Range Count: 1 00:24:09.868 NGUID/EUI64 Never Reused: No 00:24:09.868 Namespace Write Protected: No 00:24:09.868 Number of LBA Formats: 1 00:24:09.868 Current LBA Format: LBA Format #00 
00:24:09.868 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:09.868 00:24:09.868 21:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:09.868 21:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.868 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.868 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.868 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.868 21:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:09.868 21:39:59 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:09.868 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:09.868 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:09.868 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:09.868 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:09.868 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:09.868 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:09.868 rmmod nvme_tcp 00:24:09.868 rmmod nvme_fabrics 00:24:10.129 rmmod nvme_keyring 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2274051 ']' 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2274051 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2274051 ']' 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2274051 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2274051 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2274051' 00:24:10.129 killing process with pid 2274051 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2274051 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2274051 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 14> /dev/null' 00:24:10.129 21:39:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.681 21:40:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:12.681 00:24:12.681 real 0m10.901s 00:24:12.681 user 0m7.785s 00:24:12.681 sys 0m5.649s 00:24:12.681 21:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.681 21:40:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:12.681 ************************************ 00:24:12.681 END TEST nvmf_identify 00:24:12.681 ************************************ 00:24:12.681 21:40:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:12.681 21:40:02 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:12.681 21:40:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:12.681 21:40:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.681 21:40:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:12.681 ************************************ 00:24:12.681 START TEST nvmf_perf 00:24:12.681 ************************************ 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:12.681 * Looking for test storage... 00:24:12.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.681 21:40:02 
nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.681 21:40:02 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 
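The nvmf/common.sh sourcing traced above leaves the perf host with a small set of NVMe-oF TCP defaults that the later steps reuse. A rough, hand-condensed sketch of that environment (values copied from the trace; the generated host NQN and derived host ID differ per machine, and the real script sets many more variables than shown here) would be:

# sketch of the connection defaults established by nvmf/common.sh, not the script itself
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVME_HOSTNQN=$(nvme gen-hostnqn)        # nvme-cli generates a per-host NQN
NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumption: host ID is the UUID suffix of the NQN, as in the trace
NVME_CONNECT="nvme connect"
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NET_TYPE=phy                            # this job runs against physical e810 NICs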
00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:12.682 21:40:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:19.284 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:19.284 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:19.284 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf 
-- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:19.284 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:19.284 21:40:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.284 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.284 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.284 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:19.284 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:24:19.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:24:19.284 00:24:19.284 --- 10.0.0.2 ping statistics --- 00:24:19.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.284 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:24:19.284 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:19.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:24:19.284 00:24:19.284 --- 10.0.0.1 ping statistics --- 00:24:19.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.284 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:24:19.284 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.284 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:19.284 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:19.284 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.284 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:19.284 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:19.284 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.284 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:19.284 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:19.545 21:40:09 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:19.545 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:19.545 21:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:19.545 21:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.545 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2278395 00:24:19.545 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2278395 00:24:19.545 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:19.545 21:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2278395 ']' 00:24:19.545 21:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.545 21:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.545 21:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.545 21:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.545 21:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.545 [2024-07-15 21:40:09.186751] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
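The nvmf_tcp_init sequence traced just above is what gives the TCP tests a real two-endpoint topology on a single host: one port of the e810 pair (cvl_0_0) is moved into a private network namespace and becomes the target side, while its sibling (cvl_0_1) stays in the root namespace as the initiator side. Condensed into plain commands, and assuming the same interface names and addresses as the trace, the bring-up is roughly:

# rough reconstruction of nvmf_tcp_init from the trace above, not the harness itself
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP (port 4420) through the host firewall
ping -c 1 10.0.0.2                                                  # reachability check, both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# the target process is then started inside the namespace (binary path shortened):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &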
00:24:19.545 [2024-07-15 21:40:09.186818] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.545 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.545 [2024-07-15 21:40:09.260021] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:19.545 [2024-07-15 21:40:09.334271] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.545 [2024-07-15 21:40:09.334312] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.545 [2024-07-15 21:40:09.334320] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.545 [2024-07-15 21:40:09.334326] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.545 [2024-07-15 21:40:09.334332] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.545 [2024-07-15 21:40:09.334470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.545 [2024-07-15 21:40:09.334591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.545 [2024-07-15 21:40:09.334750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.545 [2024-07-15 21:40:09.334751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.485 21:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:20.485 21:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:20.485 21:40:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:20.485 21:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:20.485 21:40:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:20.485 21:40:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.485 21:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:20.485 21:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:20.745 21:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:20.745 21:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:21.006 21:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:21.006 21:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:21.267 21:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:21.267 21:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:21.267 21:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:21.267 21:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:21.267 21:40:10 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:21.267 [2024-07-15 21:40:10.977263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:24:21.267 21:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:21.528 21:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:21.528 21:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:21.789 21:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:21.789 21:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:21.789 21:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.049 [2024-07-15 21:40:11.659788] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.049 21:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:22.310 21:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:22.310 21:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:22.310 21:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:22.310 21:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:23.691 Initializing NVMe Controllers 00:24:23.691 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:23.691 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:23.691 Initialization complete. Launching workers. 00:24:23.691 ======================================================== 00:24:23.691 Latency(us) 00:24:23.691 Device Information : IOPS MiB/s Average min max 00:24:23.691 PCIE (0000:65:00.0) NSID 1 from core 0: 79530.00 310.66 401.77 14.77 5207.22 00:24:23.691 ======================================================== 00:24:23.691 Total : 79530.00 310.66 401.77 14.77 5207.22 00:24:23.691 00:24:23.691 21:40:13 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.691 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.632 Initializing NVMe Controllers 00:24:24.632 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:24.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:24.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:24.632 Initialization complete. Launching workers. 
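Setting the perf runs aside for a moment, the target bring-up traced above (perf.sh lines 28-49) is a plain JSON-RPC sequence: attach the local NVMe device, create a Malloc bdev, enable the TCP transport, and expose both bdevs as namespaces of one subsystem with a TCP listener plus a discovery listener. A condensed sketch of those calls (relative rpc.py path; the trace uses the absolute workspace path and the default /var/tmp/spdk.sock socket) looks like:

rpc=./scripts/rpc.py
./scripts/gen_nvme.sh | $rpc load_subsystem_config        # sketch: attaches the local controller at 0000:65:00.0 as Nvme0
$rpc bdev_malloc_create 64 512                            # Malloc0: 64 MB, 512 B blocks, per the trace
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420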
00:24:24.632 ======================================================== 00:24:24.632 Latency(us) 00:24:24.632 Device Information : IOPS MiB/s Average min max 00:24:24.632 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 69.00 0.27 14761.97 484.17 45494.29 00:24:24.632 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 64.00 0.25 16164.52 7001.43 52871.49 00:24:24.632 ======================================================== 00:24:24.632 Total : 133.00 0.52 15436.88 484.17 52871.49 00:24:24.632 00:24:24.632 21:40:14 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:24.892 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.854 Initializing NVMe Controllers 00:24:25.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:25.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:25.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:25.855 Initialization complete. Launching workers. 00:24:25.855 ======================================================== 00:24:25.855 Latency(us) 00:24:25.855 Device Information : IOPS MiB/s Average min max 00:24:25.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9350.99 36.53 3433.06 559.13 7954.70 00:24:25.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3668.00 14.33 8763.86 6332.45 16317.35 00:24:25.855 ======================================================== 00:24:25.855 Total : 13018.99 50.86 4934.97 559.13 16317.35 00:24:25.855 00:24:25.855 21:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:25.855 21:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:25.855 21:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:26.115 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.660 Initializing NVMe Controllers 00:24:28.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.660 Controller IO queue size 128, less than required. 00:24:28.660 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:28.660 Controller IO queue size 128, less than required. 00:24:28.660 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:28.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:28.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:28.660 Initialization complete. Launching workers. 
00:24:28.660 ======================================================== 00:24:28.660 Latency(us) 00:24:28.660 Device Information : IOPS MiB/s Average min max 00:24:28.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 868.99 217.25 151832.79 74820.12 235805.23 00:24:28.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 586.49 146.62 226911.10 62544.16 361553.59 00:24:28.660 ======================================================== 00:24:28.660 Total : 1455.49 363.87 182085.92 62544.16 361553.59 00:24:28.660 00:24:28.660 21:40:18 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:28.660 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.660 No valid NVMe controllers or AIO or URING devices found 00:24:28.660 Initializing NVMe Controllers 00:24:28.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.660 Controller IO queue size 128, less than required. 00:24:28.660 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:28.660 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:28.660 Controller IO queue size 128, less than required. 00:24:28.660 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:28.660 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:28.660 WARNING: Some requested NVMe devices were skipped 00:24:28.660 21:40:18 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:28.660 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.209 Initializing NVMe Controllers 00:24:31.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:31.209 Controller IO queue size 128, less than required. 00:24:31.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.209 Controller IO queue size 128, less than required. 00:24:31.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:31.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:31.209 Initialization complete. Launching workers. 
00:24:31.209 00:24:31.209 ==================== 00:24:31.209 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:31.209 TCP transport: 00:24:31.209 polls: 34280 00:24:31.209 idle_polls: 8873 00:24:31.209 sock_completions: 25407 00:24:31.209 nvme_completions: 5727 00:24:31.209 submitted_requests: 8626 00:24:31.209 queued_requests: 1 00:24:31.209 00:24:31.209 ==================== 00:24:31.209 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:31.209 TCP transport: 00:24:31.209 polls: 35807 00:24:31.209 idle_polls: 11599 00:24:31.209 sock_completions: 24208 00:24:31.209 nvme_completions: 3651 00:24:31.209 submitted_requests: 5412 00:24:31.209 queued_requests: 1 00:24:31.209 ======================================================== 00:24:31.209 Latency(us) 00:24:31.209 Device Information : IOPS MiB/s Average min max 00:24:31.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1431.50 357.87 91448.91 49735.10 146097.72 00:24:31.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 912.50 228.12 142916.97 60682.07 192422.32 00:24:31.209 ======================================================== 00:24:31.209 Total : 2344.00 586.00 111485.00 49735.10 192422.32 00:24:31.209 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:31.209 rmmod nvme_tcp 00:24:31.209 rmmod nvme_fabrics 00:24:31.209 rmmod nvme_keyring 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2278395 ']' 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2278395 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2278395 ']' 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2278395 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:31.209 21:40:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2278395 00:24:31.470 21:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:31.470 21:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:31.470 21:40:21 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2278395' 00:24:31.470 killing process with pid 2278395 00:24:31.470 21:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2278395 00:24:31.470 21:40:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2278395 00:24:33.390 21:40:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:33.390 21:40:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:33.390 21:40:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:33.390 21:40:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:33.390 21:40:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:33.390 21:40:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.390 21:40:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.390 21:40:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.341 21:40:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:35.341 00:24:35.341 real 0m23.044s 00:24:35.341 user 0m56.178s 00:24:35.341 sys 0m7.548s 00:24:35.341 21:40:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:35.341 21:40:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:35.341 ************************************ 00:24:35.341 END TEST nvmf_perf 00:24:35.341 ************************************ 00:24:35.341 21:40:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:35.341 21:40:25 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:35.341 21:40:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:35.341 21:40:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:35.341 21:40:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:35.603 ************************************ 00:24:35.603 START TEST nvmf_fio_host 00:24:35.603 ************************************ 00:24:35.603 21:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:35.603 * Looking for test storage... 
00:24:35.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:35.603 21:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.603 21:40:25 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.603 21:40:25 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.603 21:40:25 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.603 21:40:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.603 21:40:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:35.604 21:40:25 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:43.747 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:43.747 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:43.747 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:43.747 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.747 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:43.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:24:43.748 00:24:43.748 --- 10.0.0.2 ping statistics --- 00:24:43.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.748 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:43.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:24:43.748 00:24:43.748 --- 10.0.0.1 ping statistics --- 00:24:43.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.748 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2285253 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2285253 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2285253 ']' 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.748 21:40:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.748 [2024-07-15 21:40:32.570681] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:24:43.748 [2024-07-15 21:40:32.570746] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.748 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.748 [2024-07-15 21:40:32.642397] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:43.748 [2024-07-15 21:40:32.717607] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:43.748 [2024-07-15 21:40:32.717645] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.748 [2024-07-15 21:40:32.717653] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.748 [2024-07-15 21:40:32.717659] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.748 [2024-07-15 21:40:32.717665] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.748 [2024-07-15 21:40:32.717801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.748 [2024-07-15 21:40:32.717923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.748 [2024-07-15 21:40:32.718079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.748 [2024-07-15 21:40:32.718080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.748 21:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:43.748 21:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:24:43.748 21:40:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:43.748 [2024-07-15 21:40:33.489097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.748 21:40:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:43.748 21:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:43.748 21:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.007 21:40:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:44.007 Malloc1 00:24:44.007 21:40:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:44.267 21:40:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:44.527 21:40:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.527 [2024-07-15 21:40:34.226603] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.527 21:40:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:44.788 21:40:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:44.788 21:40:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:44.788 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:44.788 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:44.788 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:44.788 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:44.788 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.788 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:44.788 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:44.788 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.788 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.788 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:44.788 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:44.788 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:44.788 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:44.789 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.789 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.789 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:44.789 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:44.789 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:44.789 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:44.789 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:44.789 21:40:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:45.049 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:45.049 fio-3.35 00:24:45.049 Starting 1 thread 00:24:45.049 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.615 00:24:47.615 test: (groupid=0, jobs=1): err= 0: pid=2285975: Mon Jul 15 21:40:37 2024 00:24:47.615 read: IOPS=13.4k, BW=52.2MiB/s (54.8MB/s)(105MiB/2005msec) 00:24:47.615 slat (usec): min=2, max=276, avg= 2.17, stdev= 2.41 00:24:47.615 clat (usec): min=3107, max=9693, avg=5443.00, stdev=927.42 00:24:47.615 lat (usec): min=3109, max=9695, avg=5445.17, stdev=927.51 00:24:47.615 clat percentiles (usec): 00:24:47.615 | 1.00th=[ 3884], 5.00th=[ 4293], 10.00th=[ 4555], 20.00th=[ 4752], 00:24:47.615 | 30.00th=[ 4948], 40.00th=[ 5080], 50.00th=[ 5211], 60.00th=[ 5407], 00:24:47.615 | 70.00th=[ 5604], 80.00th=[ 5997], 90.00th=[ 6849], 95.00th=[ 7439], 00:24:47.615 | 99.00th=[ 8291], 99.50th=[ 8717], 99.90th=[ 9372], 99.95th=[ 9503], 00:24:47.615 | 99.99th=[ 9634] 00:24:47.615 bw ( KiB/s): min=47504, 
max=55992, per=100.00%, avg=53492.00, stdev=4014.58, samples=4 00:24:47.615 iops : min=11876, max=13998, avg=13373.00, stdev=1003.64, samples=4 00:24:47.615 write: IOPS=13.4k, BW=52.2MiB/s (54.7MB/s)(105MiB/2005msec); 0 zone resets 00:24:47.615 slat (usec): min=2, max=263, avg= 2.27, stdev= 1.78 00:24:47.615 clat (usec): min=2080, max=8262, avg=4070.33, stdev=677.28 00:24:47.615 lat (usec): min=2082, max=8264, avg=4072.60, stdev=677.40 00:24:47.615 clat percentiles (usec): 00:24:47.615 | 1.00th=[ 2606], 5.00th=[ 3064], 10.00th=[ 3294], 20.00th=[ 3621], 00:24:47.615 | 30.00th=[ 3785], 40.00th=[ 3916], 50.00th=[ 4047], 60.00th=[ 4146], 00:24:47.615 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5538], 00:24:47.615 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 6783], 99.95th=[ 7046], 00:24:47.615 | 99.99th=[ 8160] 00:24:47.615 bw ( KiB/s): min=48168, max=55520, per=100.00%, avg=53456.00, stdev=3531.89, samples=4 00:24:47.615 iops : min=12042, max=13880, avg=13364.00, stdev=882.97, samples=4 00:24:47.615 lat (msec) : 4=24.26%, 10=75.74% 00:24:47.615 cpu : usr=69.51%, sys=24.95%, ctx=23, majf=0, minf=7 00:24:47.615 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:47.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:47.615 issued rwts: total=26813,26788,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.615 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:47.615 00:24:47.615 Run status group 0 (all jobs): 00:24:47.615 READ: bw=52.2MiB/s (54.8MB/s), 52.2MiB/s-52.2MiB/s (54.8MB/s-54.8MB/s), io=105MiB (110MB), run=2005-2005msec 00:24:47.615 WRITE: bw=52.2MiB/s (54.7MB/s), 52.2MiB/s-52.2MiB/s (54.7MB/s-54.7MB/s), io=105MiB (110MB), run=2005-2005msec 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk 
'{print $3}' 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:47.615 21:40:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:47.883 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:47.883 fio-3.35 00:24:47.883 Starting 1 thread 00:24:47.883 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.431 00:24:50.431 test: (groupid=0, jobs=1): err= 0: pid=2286585: Mon Jul 15 21:40:39 2024 00:24:50.431 read: IOPS=8773, BW=137MiB/s (144MB/s)(276MiB/2010msec) 00:24:50.431 slat (usec): min=3, max=110, avg= 3.63, stdev= 1.70 00:24:50.431 clat (usec): min=2860, max=22285, avg=9126.42, stdev=2501.50 00:24:50.431 lat (usec): min=2863, max=22288, avg=9130.05, stdev=2501.77 00:24:50.431 clat percentiles (usec): 00:24:50.431 | 1.00th=[ 4424], 5.00th=[ 5538], 10.00th=[ 6194], 20.00th=[ 7046], 00:24:50.431 | 30.00th=[ 7635], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9372], 00:24:50.431 | 70.00th=[10159], 80.00th=[11207], 90.00th=[12649], 95.00th=[13698], 00:24:50.431 | 99.00th=[15926], 99.50th=[16319], 99.90th=[17695], 99.95th=[17957], 00:24:50.431 | 99.99th=[18482] 00:24:50.431 bw ( KiB/s): min=64992, max=74144, per=49.44%, avg=69408.00, stdev=3827.54, samples=4 00:24:50.431 iops : min= 4062, max= 4634, avg=4338.00, stdev=239.22, samples=4 00:24:50.431 write: IOPS=5064, BW=79.1MiB/s (83.0MB/s)(141MiB/1781msec); 0 zone resets 00:24:50.431 slat (usec): min=40, max=449, avg=41.33, stdev= 9.53 00:24:50.431 clat (usec): min=3146, max=19047, avg=9824.37, stdev=1838.43 00:24:50.431 lat (usec): min=3186, max=19185, avg=9865.70, stdev=1841.98 00:24:50.431 clat percentiles (usec): 00:24:50.431 | 1.00th=[ 6194], 5.00th=[ 7373], 10.00th=[ 7898], 20.00th=[ 8455], 00:24:50.431 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[ 9896], 00:24:50.431 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11994], 95.00th=[13173], 00:24:50.431 | 99.00th=[15926], 99.50th=[17171], 99.90th=[18744], 99.95th=[18744], 00:24:50.431 | 99.99th=[19006] 00:24:50.431 bw ( KiB/s): min=68064, max=77824, per=89.05%, avg=72160.00, stdev=4201.88, samples=4 00:24:50.431 iops : min= 4254, max= 4864, avg=4510.00, stdev=262.62, samples=4 00:24:50.431 lat (msec) : 4=0.45%, 10=65.77%, 20=33.78%, 50=0.01% 00:24:50.431 cpu : usr=83.23%, sys=13.19%, ctx=17, majf=0, minf=16 00:24:50.431 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:50.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:50.431 issued rwts: total=17635,9020,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.431 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:50.431 00:24:50.431 Run status group 0 (all jobs): 00:24:50.431 READ: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=276MiB (289MB), run=2010-2010msec 00:24:50.431 WRITE: bw=79.1MiB/s (83.0MB/s), 79.1MiB/s-79.1MiB/s (83.0MB/s-83.0MB/s), io=141MiB (148MB), run=1781-1781msec 00:24:50.431 21:40:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.431 21:40:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:50.431 21:40:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:50.431 21:40:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:50.431 21:40:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:50.431 21:40:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:50.431 21:40:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:50.431 rmmod nvme_tcp 00:24:50.431 rmmod nvme_fabrics 00:24:50.431 rmmod nvme_keyring 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2285253 ']' 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2285253 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2285253 ']' 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2285253 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2285253 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2285253' 00:24:50.431 killing process with pid 2285253 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2285253 00:24:50.431 21:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2285253 00:24:50.693 21:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:50.693 21:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:50.693 21:40:40 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:50.693 21:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.693 21:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:50.693 21:40:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.693 21:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.693 21:40:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.609 21:40:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:52.609 00:24:52.609 real 0m17.170s 00:24:52.609 user 1m9.018s 00:24:52.609 sys 0m7.382s 00:24:52.609 21:40:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:52.609 21:40:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.609 ************************************ 00:24:52.609 END TEST nvmf_fio_host 00:24:52.609 ************************************ 00:24:52.609 21:40:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:52.609 21:40:42 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:52.609 21:40:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:52.609 21:40:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:52.609 21:40:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:52.871 ************************************ 00:24:52.871 START TEST nvmf_failover 00:24:52.871 ************************************ 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:52.871 * Looking for test storage... 
00:24:52.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.871 21:40:42 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:52.872 21:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:59.461 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:59.461 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:59.461 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.461 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:59.462 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.462 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:59.722 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:59.722 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.722 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.722 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.722 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.722 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:59.722 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.722 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.722 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.982 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:59.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:59.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:24:59.982 00:24:59.983 --- 10.0.0.2 ping statistics --- 00:24:59.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.983 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:59.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.400 ms 00:24:59.983 00:24:59.983 --- 10.0.0.1 ping statistics --- 00:24:59.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.983 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2291126 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2291126 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2291126 ']' 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:59.983 21:40:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.983 [2024-07-15 21:40:49.649617] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
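The trace above is nvmf_tcp_init splitting the two E810 ports into a target/initiator pair on a single host: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), TCP port 4420 is opened in iptables, and a ping in each direction confirms the path. A condensed sketch of the same steps, assuming the cvl_0_0/cvl_0_1 interface names reported for this host (they will differ on other machines):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root namespace -> target side
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator side

Keeping the target in its own namespace is what lets one dual-port NIC act as both ends of the NVMe/TCP connection on the same machine; the target application is then launched with the same ip netns exec prefix, as the nvmfappstart trace above shows.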
00:24:59.983 [2024-07-15 21:40:49.649680] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.983 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.983 [2024-07-15 21:40:49.735887] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:00.243 [2024-07-15 21:40:49.828250] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.243 [2024-07-15 21:40:49.828311] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.243 [2024-07-15 21:40:49.828319] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.243 [2024-07-15 21:40:49.828326] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.243 [2024-07-15 21:40:49.828332] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.243 [2024-07-15 21:40:49.828475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.243 [2024-07-15 21:40:49.828649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.243 [2024-07-15 21:40:49.828650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:00.814 21:40:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:00.814 21:40:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:00.814 21:40:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:00.814 21:40:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:00.814 21:40:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.814 21:40:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.814 21:40:50 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:00.814 [2024-07-15 21:40:50.610803] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.075 21:40:50 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:01.075 Malloc0 00:25:01.075 21:40:50 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:01.336 21:40:50 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:01.596 21:40:51 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:01.596 [2024-07-15 21:40:51.287244] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.596 21:40:51 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:01.857 [2024-07-15 
21:40:51.447647] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:01.857 21:40:51 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:01.857 [2024-07-15 21:40:51.612146] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:01.857 21:40:51 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2291507 00:25:01.857 21:40:51 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:01.857 21:40:51 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:01.857 21:40:51 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2291507 /var/tmp/bdevperf.sock 00:25:01.857 21:40:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2291507 ']' 00:25:01.857 21:40:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.857 21:40:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.857 21:40:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:01.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:01.857 21:40:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.857 21:40:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:02.807 21:40:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.807 21:40:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:02.807 21:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.127 NVMe0n1 00:25:03.127 21:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.388 00:25:03.388 21:40:53 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:03.388 21:40:53 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2291826 00:25:03.388 21:40:53 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:04.329 21:40:54 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:04.590 21:40:54 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:07.889 21:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:07.889 00:25:07.889 21:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:08.150 [2024-07-15 21:40:57.811671] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811709] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811715] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811719] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811724] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811728] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811745] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811750] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811754] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811758] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811763] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811772] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811776] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811780] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811785] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811789] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811794] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811798] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811803] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with 
the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811807] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811812] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811816] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811820] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811825] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811830] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811834] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811839] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811843] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811848] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811852] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811856] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811861] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811866] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811870] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811875] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811879] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811883] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811887] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.150 [2024-07-15 21:40:57.811892] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 [2024-07-15 21:40:57.811896] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 [2024-07-15 21:40:57.811901] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 [2024-07-15 21:40:57.811905] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 [2024-07-15 21:40:57.811909] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 [2024-07-15 21:40:57.811913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 [2024-07-15 21:40:57.811918] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 [2024-07-15 21:40:57.811922] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 [2024-07-15 21:40:57.811927] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 [2024-07-15 21:40:57.811931] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 [2024-07-15 21:40:57.811935] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 [2024-07-15 21:40:57.811939] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 [2024-07-15 21:40:57.811943] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 [2024-07-15 21:40:57.811948] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 [2024-07-15 21:40:57.811953] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 [2024-07-15 21:40:57.811957] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155620 is same with the state(5) to be set 00:25:08.151 21:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:11.450 21:41:00 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:11.450 [2024-07-15 21:41:00.982315] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.450 21:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:12.392 21:41:02 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:12.392 [2024-07-15 21:41:02.159427] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.392 [2024-07-15 21:41:02.159465] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.392 [2024-07-15 21:41:02.159471] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.392 [2024-07-15 21:41:02.159475] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.392 [2024-07-15 21:41:02.159480] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.392 [2024-07-15 21:41:02.159484] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.392 [2024-07-15 21:41:02.159488] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.392 [2024-07-15 21:41:02.159493] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.392 [2024-07-15 21:41:02.159497] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.392 [2024-07-15 21:41:02.159501] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.392 [2024-07-15 21:41:02.159506] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.392 [2024-07-15 21:41:02.159510] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.392 [2024-07-15 21:41:02.159514] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.392 [2024-07-15 21:41:02.159519] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.392 [2024-07-15 21:41:02.159523] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.392 [2024-07-15 21:41:02.159527] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159532] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159540] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159544] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159549] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159553] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159557] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159562] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159566] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159571] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159575] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159579] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159585] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159589] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159593] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159598] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159603] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159607] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159611] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159615] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159619] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159624] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159628] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159633] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159637] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159641] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159654] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159658] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159662] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the 
state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159666] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159671] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159675] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159679] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159688] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159696] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159701] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159706] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159710] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159714] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159719] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159723] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159728] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159733] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159738] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159743] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 [2024-07-15 21:41:02.159747] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1155e70 is same with the state(5) to be set 00:25:12.393 21:41:02 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2291826 00:25:18.986 0 00:25:18.986 21:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2291507 00:25:18.986 21:41:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2291507 ']' 00:25:18.986 21:41:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2291507 00:25:18.986 21:41:08 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:18.986 21:41:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:18.986 21:41:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2291507 00:25:18.986 21:41:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:18.986 21:41:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:18.986 21:41:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2291507' 00:25:18.986 killing process with pid 2291507 00:25:18.986 21:41:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2291507 00:25:18.986 21:41:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2291507 00:25:18.986 21:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:18.986 [2024-07-15 21:40:51.689226] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:25:18.986 [2024-07-15 21:40:51.689285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291507 ] 00:25:18.986 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.986 [2024-07-15 21:40:51.748144] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.986 [2024-07-15 21:40:51.812299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.986 Running I/O for 15 seconds... 00:25:18.986 [2024-07-15 21:40:54.201824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.986 [2024-07-15 21:40:54.201869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.986 [2024-07-15 21:40:54.201887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.986 [2024-07-15 21:40:54.201895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.986 [2024-07-15 21:40:54.201905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.986 [2024-07-15 21:40:54.201912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.986 [2024-07-15 21:40:54.201922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.986 [2024-07-15 21:40:54.201929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.986 [2024-07-15 21:40:54.201938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.986 [2024-07-15 21:40:54.201945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.986 [2024-07-15 21:40:54.201955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.986 [2024-07-15 21:40:54.201962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.986 [2024-07-15 21:40:54.201970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.986 [2024-07-15 21:40:54.201977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.986 [2024-07-15 21:40:54.201987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.986 [2024-07-15 21:40:54.201993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.986 [2024-07-15 21:40:54.202003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.986 [2024-07-15 21:40:54.202010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.986 [2024-07-15 21:40:54.202019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.986 [2024-07-15 21:40:54.202026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.986 [2024-07-15 21:40:54.202035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.986 [2024-07-15 21:40:54.202042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.986 [2024-07-15 21:40:54.202057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.986 [2024-07-15 21:40:54.202065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.986 [2024-07-15 21:40:54.202074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.986 [2024-07-15 21:40:54.202081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:47 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 
21:40:54.202458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.987 [2024-07-15 21:40:54.202539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.987 [2024-07-15 21:40:54.202555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.987 [2024-07-15 21:40:54.202571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.987 [2024-07-15 21:40:54.202587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.987 [2024-07-15 21:40:54.202603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.987 [2024-07-15 21:40:54.202620] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.987 [2024-07-15 21:40:54.202636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.987 [2024-07-15 21:40:54.202652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.987 [2024-07-15 21:40:54.202670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.987 [2024-07-15 21:40:54.202686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.987 [2024-07-15 21:40:54.202702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.987 [2024-07-15 21:40:54.202717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.987 [2024-07-15 21:40:54.202733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.987 [2024-07-15 21:40:54.202749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.987 [2024-07-15 21:40:54.202765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.987 [2024-07-15 21:40:54.202774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.987 [2024-07-15 21:40:54.202781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.202790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.202797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.202806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.202812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.202822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.202829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.202838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.202845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.202854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.202860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.202869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.202877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.202887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.202894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.202903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.202909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.202918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.202925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.202935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.202942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.202951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.202958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.202967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.202974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.202983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.202990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 
[2024-07-15 21:40:54.203114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203437] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.988 [2024-07-15 21:40:54.203460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.988 [2024-07-15 21:40:54.203469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:54.203917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 
21:40:54.203933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.203942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe44ce0 is same with the state(5) to be set 00:25:18.989 [2024-07-15 21:40:54.203950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.989 [2024-07-15 21:40:54.203956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.989 [2024-07-15 21:40:54.203963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96368 len:8 PRP1 0x0 PRP2 0x0 00:25:18.989 [2024-07-15 21:40:54.203970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.204004] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe44ce0 was disconnected and freed. reset controller. 00:25:18.989 [2024-07-15 21:40:54.204014] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:18.989 [2024-07-15 21:40:54.204034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.989 [2024-07-15 21:40:54.204042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.204050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.989 [2024-07-15 21:40:54.204057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.204065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.989 [2024-07-15 21:40:54.204072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.204080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.989 [2024-07-15 21:40:54.204087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:54.204094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.989 [2024-07-15 21:40:54.207661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.989 [2024-07-15 21:40:54.207685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27090 (9): Bad file descriptor 00:25:18.989 [2024-07-15 21:40:54.237018] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
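The burst above is one ABORTED - SQ DELETION completion per I/O that was still queued on qpair 0xe44ce0 when the submission queue was torn down, and it ends with bdev_nvme failing over from 10.0.0.2:4420 to 10.0.0.2:4421 and resetting the controller successfully. A minimal sketch for tallying such a burst offline, assuming the console output has been saved to a file named console.log (the file name and the grep patterns are assumptions based on the messages shown here, not part of the test output):

  # Total completions aborted by the SQ deletion, regardless of line wrapping.
  grep -o 'ABORTED - SQ DELETION' console.log | wc -l
  # Split the aborted submissions by opcode on the I/O queue (sqid:1).
  grep -o 'WRITE sqid:1 cid' console.log | wc -l
  grep -o 'READ sqid:1 cid' console.log | wc -l
  # Confirm each failover episode finished.
  grep -o 'Resetting controller successful' console.log | wc -l

The first count should roughly equal the sum of the READ and WRITE counts; the admin-queue ASYNC EVENT REQUEST aborts printed just before the reset add a few extra per episode.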
00:25:18.989 [2024-07-15 21:40:57.812383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:57.812420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:57.812437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:57.812450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:57.812460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:57.812467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:57.812476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:57.812483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.989 [2024-07-15 21:40:57.812493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.989 [2024-07-15 21:40:57.812500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812589] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812912] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.812991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.812999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.813008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.813014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.813023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.813030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.990 [2024-07-15 21:40:57.813040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.990 [2024-07-15 21:40:57.813047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40960 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:18.991 [2024-07-15 21:40:57.813244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813403] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813564] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.991 [2024-07-15 21:40:57.813612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.991 [2024-07-15 21:40:57.813732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.991 [2024-07-15 21:40:57.813739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.813748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.813755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.813763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.813770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.813779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.813786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.813795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.813802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.813810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.813817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.813826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.813833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.813842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.813848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.813857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.992 [2024-07-15 21:40:57.813865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.813874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.992 [2024-07-15 21:40:57.813882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:18.992 [2024-07-15 21:40:57.813890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.992 [2024-07-15 21:40:57.813897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.813906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.992 [2024-07-15 21:40:57.813913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.813922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.992 [2024-07-15 21:40:57.813929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.813938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.992 [2024-07-15 21:40:57.813944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.813953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.992 [2024-07-15 21:40:57.813960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.813969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.992 [2024-07-15 21:40:57.813976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.813985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.813992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814049] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814213] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.992 [2024-07-15 21:40:57.814347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.992 [2024-07-15 21:40:57.814363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41160 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.992 [2024-07-15 21:40:57.814379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.992 [2024-07-15 21:40:57.814394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.992 [2024-07-15 21:40:57.814410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.992 [2024-07-15 21:40:57.814419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.992 [2024-07-15 21:40:57.814426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:40:57.814435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:40:57.814442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:40:57.814452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:40:57.814458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:40:57.814479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.993 [2024-07-15 21:40:57.814487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.993 [2024-07-15 21:40:57.814493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41208 len:8 PRP1 0x0 PRP2 0x0 00:25:18.993 [2024-07-15 21:40:57.814501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:40:57.814539] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xff1f40 was disconnected and freed. reset controller. 
00:25:18.993 [2024-07-15 21:40:57.814548] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:25:18.993 [2024-07-15 21:40:57.814566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:18.993 [2024-07-15 21:40:57.814574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.993 [2024-07-15 21:40:57.814583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:18.993 [2024-07-15 21:40:57.814590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.993 [2024-07-15 21:40:57.814598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:18.993 [2024-07-15 21:40:57.814605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.993 [2024-07-15 21:40:57.814612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:18.993 [2024-07-15 21:40:57.814619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.993 [2024-07-15 21:40:57.814628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.993 [2024-07-15 21:40:57.818187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.993 [2024-07-15 21:40:57.818213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27090 (9): Bad file descriptor
00:25:18.993 [2024-07-15 21:40:57.890545] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
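
The notices above record each queued I/O on qid:1 being completed with ABORTED - SQ DELETION (00/08) while bdev_nvme disconnects the qpair, starts a failover from 10.0.0.2:4421 to 10.0.0.2:4422, and resets the controller. When reviewing output like this, the totals are usually more useful than the individual LBAs. The snippet below is a minimal sketch for tallying these notices from a saved copy of the console output; it is not part of the test suite, and the file name and regular expressions are only assumptions based on the nvme_qpair print format visible above.

#!/usr/bin/env python3
# tally_sq_deletion.py -- illustrative helper, not part of SPDK's autotest scripts.
# Counts the command prints and "ABORTED - SQ DELETION" completions emitted by
# nvme_qpair.c in a saved console log, so long abort bursts can be read at a glance.
import re
import sys
from collections import Counter

# "...print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41240 len:8 ..."
# "...print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ..."
CMD_RE = re.compile(r"print_command: \*NOTICE\*: (READ|WRITE|ASYNC EVENT REQUEST)")
# "...print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 ..."
ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\) qid:(\d+)")

commands = Counter()
aborts_by_qid = Counter()

for line in sys.stdin:
    # findall() so lines holding several concatenated records are still counted fully
    for opcode in CMD_RE.findall(line):
        commands[opcode] += 1
    for qid in ABORT_RE.findall(line):
        aborts_by_qid[qid] += 1

for opcode, count in commands.most_common():
    print(f"{opcode:<20} {count}")
for qid, count in sorted(aborts_by_qid.items()):
    print(f"aborted completions on qid {qid}: {count}")

Run it as, for example, python3 tally_sq_deletion.py < console.log, where console.log is a placeholder for a saved copy of this build output.
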
00:25:18.993 [2024-07-15 21:41:02.160571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160779] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.160987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.160995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.161004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.161011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.161020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.161027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.161036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.161043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.161051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.161058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.161067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.161074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.161083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.161090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.161099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:87 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.161111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.161120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.993 [2024-07-15 21:41:02.161133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.993 [2024-07-15 21:41:02.161142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.994 [2024-07-15 21:41:02.161149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.994 [2024-07-15 21:41:02.161165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.994 [2024-07-15 21:41:02.161182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.994 [2024-07-15 21:41:02.161197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.994 [2024-07-15 21:41:02.161214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.994 [2024-07-15 21:41:02.161230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.994 [2024-07-15 21:41:02.161246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.994 [2024-07-15 21:41:02.161262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86712 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 
[2024-07-15 21:41:02.161440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.994 [2024-07-15 21:41:02.161721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.994 [2024-07-15 21:41:02.161728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.995 [2024-07-15 21:41:02.161737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.995 [2024-07-15 21:41:02.161744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.995 [2024-07-15 21:41:02.161753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.995 [2024-07-15 21:41:02.161760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.995 [2024-07-15 21:41:02.161769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.995 [2024-07-15 21:41:02.161775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.995 [2024-07-15 21:41:02.161784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.995 [2024-07-15 21:41:02.161791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.995 [2024-07-15 21:41:02.161800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.995 [2024-07-15 21:41:02.161807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.995 [2024-07-15 21:41:02.161816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.995 [2024-07-15 21:41:02.161823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.995 [2024-07-15 21:41:02.161832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.995 [2024-07-15 21:41:02.161838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.995 [2024-07-15 21:41:02.161847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.995 [2024-07-15 21:41:02.161854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.995 [2024-07-15 21:41:02.161863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.995 [2024-07-15 21:41:02.161870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.995 [2024-07-15 21:41:02.161879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.995 [2024-07-15 21:41:02.161885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.995 [2024-07-15 21:41:02.161895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.995 [2024-07-15 21:41:02.161901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.995 [2024-07-15 21:41:02.161910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.995 [2024-07-15 21:41:02.161917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:18.995 [2024-07-15 21:41:02.161927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.995 [2024-07-15 21:41:02.161934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats for the remaining queued WRITE commands (lba 87048-87136) and READ commands (lba 86440-86696) on qid:1, each completed with ABORTED - SQ DELETION (00/08) ...]
00:25:18.996 [2024-07-15 21:41:02.162679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.996 [2024-07-15 21:41:02.162686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.996 [2024-07-15 21:41:02.162692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86704 len:8 PRP1 0x0 PRP2 0x0
00:25:18.996 [2024-07-15 21:41:02.162701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.996 [2024-07-15 21:41:02.162738] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xff27b0 was disconnected and freed. reset controller.
00:25:18.996 [2024-07-15 21:41:02.162748] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:18.996 [2024-07-15 21:41:02.162766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.996 [2024-07-15 21:41:02.162774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.996 [2024-07-15 21:41:02.162782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.996 [2024-07-15 21:41:02.162790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.996 [2024-07-15 21:41:02.162800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.996 [2024-07-15 21:41:02.162807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.996 [2024-07-15 21:41:02.162815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.996 [2024-07-15 21:41:02.162822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.996 [2024-07-15 21:41:02.162829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.996 [2024-07-15 21:41:02.166381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.996 [2024-07-15 21:41:02.166406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27090 (9): Bad file descriptor 00:25:18.996 [2024-07-15 21:41:02.250187] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:18.996
00:25:18.996 Latency(us)
00:25:18.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:18.996 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:18.996 Verification LBA range: start 0x0 length 0x4000
00:25:18.996 NVMe0n1 : 15.01 11886.13 46.43 440.19 0.00 10356.69 832.85 13052.59
00:25:18.996 ===================================================================================================================
00:25:18.996 Total : 11886.13 46.43 440.19 0.00 10356.69 832.85 13052.59
00:25:18.996 Received shutdown signal, test time was about 15.000000 seconds
00:25:18.996
00:25:18.996 Latency(us)
00:25:18.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:18.996 ===================================================================================================================
00:25:18.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:18.996 21:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:18.996 21:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:18.996 21:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:18.996 21:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2294839
00:25:18.996 21:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2294839 /var/tmp/bdevperf.sock
00:25:18.996 21:41:08 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:18.996 21:41:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2294839 ']'
00:25:18.996 21:41:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:18.996 21:41:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:18.996 21:41:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:18.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
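For readers following the trace, the host/failover.sh@65 and @67 records above are the pass/fail gate for the three-listener run: the script counts how many times bdevperf logged "Resetting controller successful" and requires exactly three. A minimal stand-alone sketch of that check is shown below; it assumes the count is taken from the same try.txt capture that is cat'ed and removed later in this log, since the grep's input file is not visible in the trace record itself.

  #!/usr/bin/env bash
  # Hypothetical recap of the reset-count gate at host/failover.sh@65-67.
  # Assumption: bdevperf output was captured to try.txt (the file cat'ed and
  # rm'ed later in this log).
  try_txt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt

  # One "Resetting controller successful" line is expected per failover cycle.
  count=$(grep -c 'Resetting controller successful' "$try_txt")
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, found $count" >&2
      exit 1
  fi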
00:25:18.996 21:41:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.996 21:41:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:19.568 21:41:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.568 21:41:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:19.568 21:41:09 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:19.568 [2024-07-15 21:41:09.355176] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:19.829 21:41:09 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:19.829 [2024-07-15 21:41:09.523571] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:19.829 21:41:09 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:20.090 NVMe0n1 00:25:20.350 21:41:09 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:20.350 00:25:20.350 21:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:20.922 00:25:20.922 21:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:20.922 21:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:20.922 21:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:21.183 21:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:24.482 21:41:13 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:24.482 21:41:13 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:24.482 21:41:13 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2295859 00:25:24.482 21:41:13 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:24.482 21:41:13 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2295859 00:25:25.424 0 00:25:25.424 21:41:15 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:25.424 [2024-07-15 21:41:08.442604] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:25:25.424 [2024-07-15 21:41:08.442665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2294839 ] 00:25:25.424 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.424 [2024-07-15 21:41:08.501157] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.424 [2024-07-15 21:41:08.564343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.424 [2024-07-15 21:41:10.787621] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:25.424 [2024-07-15 21:41:10.787671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.424 [2024-07-15 21:41:10.787682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.424 [2024-07-15 21:41:10.787691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.424 [2024-07-15 21:41:10.787699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.424 [2024-07-15 21:41:10.787707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.424 [2024-07-15 21:41:10.787713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.424 [2024-07-15 21:41:10.787721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.424 [2024-07-15 21:41:10.787728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.424 [2024-07-15 21:41:10.787735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:25.424 [2024-07-15 21:41:10.787764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:25.424 [2024-07-15 21:41:10.787778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d0090 (9): Bad file descriptor 00:25:25.424 [2024-07-15 21:41:10.879402] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:25.424 Running I/O for 1 seconds... 
00:25:25.424
00:25:25.424 Latency(us)
00:25:25.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:25.424 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:25.424 Verification LBA range: start 0x0 length 0x4000
00:25:25.424 NVMe0n1 : 1.01 11288.41 44.10 0.00 0.00 11280.28 2607.79 11141.12
00:25:25.424 ===================================================================================================================
00:25:25.424 Total : 11288.41 44.10 0.00 0.00 11280.28 2607.79 11141.12
00:25:25.424 21:41:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:25.424 21:41:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:25:25.685 21:41:15 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:25.685 21:41:15 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:25.685 21:41:15 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:25:25.945 21:41:15 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:26.205 21:41:15 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:25:29.534 21:41:18 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:29.534 21:41:18 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:25:29.534 21:41:18 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2294839
00:25:29.534 21:41:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2294839 ']'
00:25:29.535 21:41:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2294839
00:25:29.535 21:41:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:25:29.535 21:41:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:29.535 21:41:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2294839
00:25:29.535 21:41:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:29.535 21:41:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:29.535 21:41:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2294839'
00:25:29.535 killing process with pid 2294839
00:25:29.535 21:41:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2294839
00:25:29.535 21:41:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2294839
00:25:29.535 21:41:19 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:25:29.535 21:41:19 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:29.535 21:41:19 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:29.535
21:41:19 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:29.535 21:41:19 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:29.535 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:29.535 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:29.535 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:29.535 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:29.535 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:29.535 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:29.535 rmmod nvme_tcp 00:25:29.535 rmmod nvme_fabrics 00:25:29.795 rmmod nvme_keyring 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2291126 ']' 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2291126 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2291126 ']' 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2291126 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2291126 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2291126' 00:25:29.795 killing process with pid 2291126 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2291126 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2291126 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.795 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:29.796 21:41:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.796 21:41:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.796 21:41:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.340 21:41:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:32.340 00:25:32.340 real 0m39.212s 00:25:32.340 user 2m1.203s 00:25:32.340 sys 0m8.107s 00:25:32.340 21:41:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:32.340 21:41:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
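Stripped of the xtrace noise, the failover test body traced above is a short sequence of rpc.py calls. The sketch below condenses that sequence for readability; the $spdk, $rpc and $nqn variables and the attach loop are shorthand introduced here, while every command and flag is taken from the trace itself. It is an illustrative reconstruction, not the verbatim host/failover.sh source.

  #!/usr/bin/env bash
  # Illustrative condensation of the rpc.py sequence traced in this log.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$spdk/scripts/rpc.py"
  nqn=nqn.2016-06.io.spdk:cnode1

  # Give the subsystem two extra listeners so the initiator has failover paths.
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

  # Register all three paths with the bdevperf-side bdev_nvme layer.
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
  done

  # Drop the active path, wait for the reset, then drive I/O and let bdev_nvme
  # fail over to a remaining listener (the "Start failover from 10.0.0.2:4420
  # to 10.0.0.2:4421" notice in the try.txt dump above comes from this step).
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
  sleep 3
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

  # Teardown mirrors the tail of the trace: detach the remaining paths and
  # delete the subsystem on the target.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
  $rpc nvmf_delete_subsystem $nqn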
00:25:32.340 ************************************ 00:25:32.340 END TEST nvmf_failover 00:25:32.340 ************************************ 00:25:32.340 21:41:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:32.340 21:41:21 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:32.340 21:41:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:32.340 21:41:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:32.340 21:41:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:32.340 ************************************ 00:25:32.340 START TEST nvmf_host_discovery 00:25:32.340 ************************************ 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:32.340 * Looking for test storage... 00:25:32.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.340 21:41:21 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:32.341 21:41:21 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:32.341 21:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.933 21:41:28 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:38.933 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:38.933 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:38.933 21:41:28 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:38.933 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:38.934 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:38.934 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.934 21:41:28 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:38.934 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:39.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:25:39.195 00:25:39.195 --- 10.0.0.2 ping statistics --- 00:25:39.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.195 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:39.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:25:39.195 00:25:39.195 --- 10.0.0.1 ping statistics --- 00:25:39.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.195 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2301131 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
2301131 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2301131 ']' 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:39.195 21:41:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.195 [2024-07-15 21:41:28.900459] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:25:39.195 [2024-07-15 21:41:28.900522] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.195 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.195 [2024-07-15 21:41:28.987970] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.457 [2024-07-15 21:41:29.079344] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.457 [2024-07-15 21:41:29.079398] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.457 [2024-07-15 21:41:29.079406] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.457 [2024-07-15 21:41:29.079414] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.457 [2024-07-15 21:41:29.079420] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
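The nvmf/common.sh trace above shows how the rig wires the two e810 ports together before the target comes up: the target-side interface is moved into its own network namespace and the target process runs inside that namespace. A rough sketch of that plumbing, using the interface names and addresses from the trace, is shown below; the trailing ampersand on the nvmf_tgt line is a simplification of the waitforlisten handling in the real script.

  #!/usr/bin/env bash
  # Sketch of the namespace setup traced in nvmf/common.sh@242-268 above.
  # Interface names (cvl_0_0 target side, cvl_0_1 initiator side) and the
  # 10.0.0.x addresses are taken from the trace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1

  # Target NIC lives in its own namespace; the initiator NIC stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP traffic in and verify both directions before the test starts.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # The target itself is then launched inside the namespace, as in the trace.
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &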
00:25:39.457 [2024-07-15 21:41:29.079444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.029 [2024-07-15 21:41:29.738960] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.029 [2024-07-15 21:41:29.751161] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.029 null0 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.029 21:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.030 null1 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2301210 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2301210 /tmp/host.sock 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2301210 ']' 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:40.030 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:40.030 21:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.291 [2024-07-15 21:41:29.846030] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:25:40.291 [2024-07-15 21:41:29.846090] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2301210 ] 00:25:40.291 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.291 [2024-07-15 21:41:29.908943] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.291 [2024-07-15 21:41:29.983187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.864 21:41:30 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.864 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.125 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.387 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.387 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:41.387 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:41.387 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.387 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.387 [2024-07-15 21:41:30.978287] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.387 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.387 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:41.387 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.387 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.387 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.387 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.387 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.387 21:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.387 21:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.387 
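Condensed from the rpc_cmd calls traced above, the target-side provisioning and the host-side discovery start amount to the following sketch. rpc_cmd is the test framework's RPC wrapper; treating its calls as a standalone sequence is an assumption, and note that in the actual run the discovery service is started before cnode0 exists, which is why the early assertions deliberately see empty controller and bdev lists.

# Target side (default RPC socket): transport, discovery listener, backing bdevs,
# and the data subsystem the discovery test will later find.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Host side (second nvmf_tgt reached via /tmp/host.sock): enable bdev_nvme logging
# and start the discovery service against the target's discovery subsystem.
rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test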
21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:25:41.387 21:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:41.958 [2024-07-15 21:41:31.679340] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:41.958 [2024-07-15 21:41:31.679362] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:41.958 [2024-07-15 21:41:31.679375] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:42.218 [2024-07-15 21:41:31.768668] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:42.219 [2024-07-15 21:41:31.953594] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:42.219 [2024-07-15 21:41:31.953618] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:42.479 21:41:32 
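The waitforcondition/eval/sleep expansions that dominate the trace come from a small polling helper in common/autotest_common.sh. Reconstructed from its expansion above (lines @912 through @918), it is roughly the following sketch; the real definition may differ in details such as the final return value.

waitforcondition() {
    # Poll an arbitrary shell condition, passed as a string, for up to ~10 seconds.
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0    # condition became true
        fi
        sleep 1
    done
    return 1            # gave up; the caller treats this as a test failure
}

Each condition seen in the trace, for example '[[ "$(get_bdev_list)" == "nvme0n1" ]]', is simply handed to this helper as a string and re-evaluated once per second.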
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:42.479 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.739 [2024-07-15 21:41:32.506370] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:42.739 [2024-07-15 21:41:32.507385] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:42.739 [2024-07-15 21:41:32.507410] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.739 21:41:32 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.999 [2024-07-15 21:41:32.594681] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:42.999 21:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:43.259 [2024-07-15 21:41:32.860051] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:43.259 [2024-07-15 21:41:32.860069] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:43.259 [2024-07-15 21:41:32.860076] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.200 [2024-07-15 21:41:33.790727] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:44.200 [2024-07-15 21:41:33.790749] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:44.200 [2024-07-15 21:41:33.795243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.200 [2024-07-15 21:41:33.795260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.200 [2024-07-15 21:41:33.795269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.200 [2024-07-15 21:41:33.795281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.200 [2024-07-15 21:41:33.795290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.200 [2024-07-15 21:41:33.795297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.200 [2024-07-15 21:41:33.795305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.200 [2024-07-15 21:41:33.795312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.200 [2024-07-15 21:41:33.795319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8b60 is same with the state(5) to be set 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.200 21:41:33 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.200 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:44.200 [2024-07-15 21:41:33.805256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8b60 (9): Bad file descriptor 00:25:44.200 [2024-07-15 21:41:33.815295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:44.200 [2024-07-15 21:41:33.815698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.200 [2024-07-15 21:41:33.815734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c8b60 with addr=10.0.0.2, port=4420 00:25:44.200 [2024-07-15 21:41:33.815745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8b60 is same with the state(5) to be set 00:25:44.200 [2024-07-15 21:41:33.815764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8b60 (9): Bad file descriptor 00:25:44.200 [2024-07-15 21:41:33.815791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:44.201 [2024-07-15 21:41:33.815799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:44.201 [2024-07-15 21:41:33.815808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:44.201 [2024-07-15 21:41:33.815834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.201 [2024-07-15 21:41:33.825352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:44.201 [2024-07-15 21:41:33.825751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.201 [2024-07-15 21:41:33.825764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c8b60 with addr=10.0.0.2, port=4420 00:25:44.201 [2024-07-15 21:41:33.825772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8b60 is same with the state(5) to be set 00:25:44.201 [2024-07-15 21:41:33.825783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8b60 (9): Bad file descriptor 00:25:44.201 [2024-07-15 21:41:33.825798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:44.201 [2024-07-15 21:41:33.825805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:44.201 [2024-07-15 21:41:33.825812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:44.201 [2024-07-15 21:41:33.825823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.201 [2024-07-15 21:41:33.835407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:44.201 [2024-07-15 21:41:33.835811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.201 [2024-07-15 21:41:33.835824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c8b60 with addr=10.0.0.2, port=4420 00:25:44.201 [2024-07-15 21:41:33.835831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8b60 is same with the state(5) to be set 00:25:44.201 [2024-07-15 21:41:33.835843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8b60 (9): Bad file descriptor 00:25:44.201 [2024-07-15 21:41:33.835859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:44.201 [2024-07-15 21:41:33.835867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:44.201 [2024-07-15 21:41:33.835874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:44.201 [2024-07-15 21:41:33.835918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.201 [2024-07-15 21:41:33.845462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:44.201 [2024-07-15 21:41:33.845778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.201 [2024-07-15 21:41:33.845789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c8b60 with addr=10.0.0.2, port=4420 00:25:44.201 [2024-07-15 21:41:33.845797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8b60 is same with the state(5) to be set 00:25:44.201 [2024-07-15 21:41:33.845808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8b60 (9): Bad file descriptor 00:25:44.201 [2024-07-15 21:41:33.845818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:44.201 [2024-07-15 21:41:33.845825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:44.201 [2024-07-15 21:41:33.845832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:44.201 [2024-07-15 21:41:33.845842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:44.201 [2024-07-15 21:41:33.855515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:44.201 [2024-07-15 21:41:33.855938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.201 [2024-07-15 21:41:33.855953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c8b60 with addr=10.0.0.2, port=4420 00:25:44.201 [2024-07-15 21:41:33.855961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8b60 is same with the state(5) to be set 00:25:44.201 [2024-07-15 21:41:33.855972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8b60 (9): Bad file descriptor 00:25:44.201 [2024-07-15 21:41:33.855995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:44.201 [2024-07-15 21:41:33.856002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:44.201 [2024-07-15 21:41:33.856009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:25:44.201 [2024-07-15 21:41:33.856019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.201 [2024-07-15 21:41:33.865567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:44.201 [2024-07-15 21:41:33.865785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.201 [2024-07-15 21:41:33.865797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c8b60 with addr=10.0.0.2, port=4420 00:25:44.201 [2024-07-15 21:41:33.865804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8b60 is same with the state(5) to be set 00:25:44.201 [2024-07-15 21:41:33.865816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8b60 (9): Bad file descriptor 00:25:44.201 [2024-07-15 21:41:33.865826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:44.201 [2024-07-15 21:41:33.865832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:44.201 [2024-07-15 21:41:33.865839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:44.201 [2024-07-15 21:41:33.865856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.201 [2024-07-15 21:41:33.875620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:44.201 [2024-07-15 21:41:33.876021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.201 [2024-07-15 21:41:33.876032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c8b60 with addr=10.0.0.2, port=4420 00:25:44.201 [2024-07-15 21:41:33.876039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8b60 is same with the state(5) to be set 00:25:44.201 [2024-07-15 21:41:33.876049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8b60 (9): Bad file descriptor 00:25:44.201 [2024-07-15 21:41:33.876073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:44.201 [2024-07-15 21:41:33.876080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:44.201 [2024-07-15 21:41:33.876087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:44.201 [2024-07-15 21:41:33.876097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
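The burst of connect() errno 111 and "Resetting controller failed" errors above is the host-side bdev_nvme driver retrying the path on port 4420 that the target just stopped listening on; the discovery poller resolves this just below by dropping 4420 and keeping 4421. The port list the test then asserts on comes from the get_subsystem_paths helper, which, per the @63 pipeline visible in the trace, is roughly the sketch below (the real helper lives in host/discovery.sh).

get_subsystem_paths() {
    # List the trsvcid (TCP port) of every active path of the given controller,
    # sorted numerically and flattened onto one line.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}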
00:25:44.201 [2024-07-15 21:41:33.879079] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:44.201 [2024-07-15 21:41:33.879100] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:44.201 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.202 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.202 21:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:44.462 
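After bdev_nvme_stop_discovery the test expects both namespaces to disappear and two bdev-removal notifications to arrive. The notification check is driven by the get_notification_count helper; reconstructed from the @74/@75 lines and the notify_id progression (0, 1, 2, 4) in the trace, it is roughly the sketch below, with the exact cursor-advance logic being an assumption.

get_notification_count() {
    # Count notifications newer than the ones already consumed, then advance
    # the cursor so the next call only sees fresh events.
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}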
21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.462 21:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.845 [2024-07-15 21:41:35.238353] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:45.845 [2024-07-15 21:41:35.238370] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:45.845 [2024-07-15 21:41:35.238382] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:45.845 [2024-07-15 21:41:35.368796] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:45.845 [2024-07-15 21:41:35.431585] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:45.845 [2024-07-15 21:41:35.431616] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:45.845 request: 00:25:45.845 { 00:25:45.845 "name": "nvme", 00:25:45.845 "trtype": "tcp", 00:25:45.845 "traddr": "10.0.0.2", 00:25:45.845 "adrfam": "ipv4", 00:25:45.845 "trsvcid": "8009", 00:25:45.845 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:45.845 "wait_for_attach": true, 00:25:45.845 "method": "bdev_nvme_start_discovery", 00:25:45.845 "req_id": 1 00:25:45.845 } 00:25:45.845 Got JSON-RPC error response 00:25:45.845 response: 00:25:45.845 { 00:25:45.845 "code": -17, 00:25:45.845 "message": "File exists" 00:25:45.845 } 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.845 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.845 request: 00:25:45.845 { 00:25:45.845 "name": "nvme_second", 00:25:45.845 "trtype": "tcp", 00:25:45.845 "traddr": "10.0.0.2", 00:25:45.845 "adrfam": "ipv4", 00:25:45.845 "trsvcid": "8009", 00:25:45.845 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:45.845 "wait_for_attach": true, 00:25:45.845 "method": "bdev_nvme_start_discovery", 00:25:45.845 "req_id": 1 00:25:45.845 } 00:25:45.845 Got JSON-RPC error response 00:25:45.845 response: 00:25:45.845 { 00:25:45.845 "code": -17, 00:25:45.845 "message": "File exists" 00:25:45.845 } 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.846 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.119 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.119 21:41:35 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:46.120 21:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:46.120 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:46.120 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:46.120 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:46.120 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:46.120 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:46.120 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:46.120 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:46.120 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.120 21:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.065 [2024-07-15 21:41:36.691176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.065 [2024-07-15 21:41:36.691209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1611970 with addr=10.0.0.2, port=8010 00:25:47.065 [2024-07-15 21:41:36.691222] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:47.065 [2024-07-15 21:41:36.691229] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:47.065 [2024-07-15 21:41:36.691237] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:48.008 [2024-07-15 21:41:37.693653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.008 [2024-07-15 21:41:37.693675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1611570 with addr=10.0.0.2, port=8010 00:25:48.008 [2024-07-15 21:41:37.693686] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:48.008 [2024-07-15 21:41:37.693692] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:48.008 [2024-07-15 21:41:37.693699] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:48.952 [2024-07-15 21:41:38.695554] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:48.952 request: 00:25:48.952 { 00:25:48.952 "name": "nvme_second", 00:25:48.952 "trtype": "tcp", 00:25:48.952 "traddr": "10.0.0.2", 00:25:48.952 "adrfam": "ipv4", 00:25:48.952 "trsvcid": "8010", 00:25:48.952 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:48.952 "wait_for_attach": false, 00:25:48.952 "attach_timeout_ms": 3000, 00:25:48.952 "method": "bdev_nvme_start_discovery", 00:25:48.952 "req_id": 1 00:25:48.952 } 00:25:48.952 Got JSON-RPC error response 00:25:48.952 response: 00:25:48.952 { 00:25:48.952 "code": -110, 
00:25:48.952 "message": "Connection timed out" 00:25:48.952 } 00:25:48.952 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:48.952 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:48.952 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:48.952 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:48.952 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:48.952 21:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:48.952 21:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:48.952 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.952 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.952 21:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:48.952 21:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:48.952 21:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:48.952 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2301210 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:49.212 rmmod nvme_tcp 00:25:49.212 rmmod nvme_fabrics 00:25:49.212 rmmod nvme_keyring 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2301131 ']' 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2301131 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2301131 ']' 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2301131 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2301131 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2301131' 00:25:49.212 killing process with pid 2301131 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2301131 00:25:49.212 21:41:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2301131 00:25:49.212 21:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:49.212 21:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:49.212 21:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:49.212 21:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:49.212 21:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:49.212 21:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.212 21:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:49.212 21:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:51.760 00:25:51.760 real 0m19.363s 00:25:51.760 user 0m22.813s 00:25:51.760 sys 0m6.561s 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.760 ************************************ 00:25:51.760 END TEST nvmf_host_discovery 00:25:51.760 ************************************ 00:25:51.760 21:41:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:51.760 21:41:41 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:51.760 21:41:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:51.760 21:41:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:51.760 21:41:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:51.760 ************************************ 00:25:51.760 START TEST nvmf_host_multipath_status 00:25:51.760 ************************************ 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:51.760 * Looking for test storage... 
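The next suite is driven through the run_test wrapper shown above. Outside the harness, the same script can in principle be launched directly against the checked-out tree; the sketch below uses the workspace path from this run and still assumes the nvmf target environment that autotest normally prepares (NICs, hugepages, built binaries).
# Direct invocation sketch; the workspace path is specific to this Jenkins run.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./test/nvmf/host/multipath_status.sh --transport=tcp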
00:25:51.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:51.760 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:51.761 21:41:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:51.761 21:41:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.420 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:58.421 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:58.421 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
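The device scan above resolves each matching PCI function to its kernel netdev through sysfs. A condensed sketch of that mapping step follows; the 0000:4b:00.x addresses are the ones reported in this run, and the glob mirrors the pci_net_devs=(...) expansion in nvmf/common.sh.
# For each e810 PCI function found above, list the net interfaces bound to it.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        # guard against an unmatched glob being passed through literally
        [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
    done
done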
00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:58.421 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:58.421 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:58.421 21:41:48 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.421 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:58.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:25:58.682 00:25:58.682 --- 10.0.0.2 ping statistics --- 00:25:58.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.682 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:58.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:25:58.682 00:25:58.682 --- 10.0.0.1 ping statistics --- 00:25:58.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.682 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2307376 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2307376 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2307376 ']' 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:58.682 21:41:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.941 [2024-07-15 21:41:48.487064] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:25:58.941 [2024-07-15 21:41:48.487139] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.941 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.941 [2024-07-15 21:41:48.557699] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:58.941 [2024-07-15 21:41:48.631809] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.941 [2024-07-15 21:41:48.631848] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.942 [2024-07-15 21:41:48.631855] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.942 [2024-07-15 21:41:48.631862] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.942 [2024-07-15 21:41:48.631868] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.942 [2024-07-15 21:41:48.632005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.942 [2024-07-15 21:41:48.632004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.510 21:41:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:59.510 21:41:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:59.510 21:41:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:59.510 21:41:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:59.510 21:41:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:59.510 21:41:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.510 21:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2307376 00:25:59.510 21:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:59.770 [2024-07-15 21:41:49.440222] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.770 21:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:00.031 Malloc0 00:26:00.031 21:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:00.031 21:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:00.292 21:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.292 [2024-07-15 21:41:50.076990] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.292 21:41:50 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:00.552 [2024-07-15 21:41:50.233363] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:00.552 21:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2307733 00:26:00.552 21:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:00.552 21:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:00.552 21:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2307733 /var/tmp/bdevperf.sock 00:26:00.552 21:41:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2307733 ']' 00:26:00.552 21:41:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:00.552 21:41:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:00.552 21:41:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:00.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:00.552 21:41:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:00.552 21:41:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:01.492 21:41:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:01.492 21:41:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:01.492 21:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:01.492 21:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:02.062 Nvme0n1 00:26:02.062 21:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:02.323 Nvme0n1 00:26:02.323 21:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:02.323 21:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:04.231 21:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:04.231 21:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:04.490 21:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:04.750 21:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:05.691 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:05.691 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:05.691 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.691 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.691 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.691 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:05.974 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.974 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.974 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.974 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.974 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.974 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:06.235 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.235 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:06.235 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.235 21:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:06.235 21:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.235 21:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:06.235 21:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.235 21:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:06.495 21:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.495 21:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:06.495 21:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.495 21:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:06.755 21:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.755 21:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:06.755 21:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:06.755 21:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:07.015 21:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:07.956 21:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:07.956 21:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:07.956 21:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.956 21:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:08.217 21:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.217 21:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:08.217 21:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.217 21:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:08.478 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.478 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:08.478 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.478 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:08.478 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.478 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.478 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.478 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.739 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.739 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:08.739 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.739 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:09.000 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.000 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:09.000 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.000 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:09.000 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.000 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:09.000 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:09.261 21:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:09.261 21:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:10.645 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:10.645 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:10.645 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.645 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.645 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.645 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:10.646 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.646 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.646 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.646 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.646 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.646 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:10.909 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.909 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:10.909 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.909 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.169 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.169 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.169 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.169 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.169 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.169 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.169 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.169 21:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:11.431 21:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.431 21:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:11.431 21:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:11.692 21:42:01 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:11.692 21:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:12.633 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:12.633 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:12.633 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.893 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:12.893 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.893 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:12.893 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.893 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.153 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.153 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.153 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.153 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.153 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.153 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.153 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.153 21:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.414 21:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.414 21:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:13.414 21:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.414 21:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:13.673 21:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:26:13.673 21:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:13.674 21:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.674 21:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:13.674 21:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.674 21:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:13.674 21:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:13.934 21:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:14.194 21:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:15.161 21:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:15.161 21:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:15.161 21:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.161 21:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.161 21:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.161 21:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:15.161 21:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.161 21:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.422 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.422 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.422 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.422 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.682 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.682 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:26:15.682 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.682 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.682 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.682 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:15.682 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.682 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:15.941 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.941 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:15.941 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.941 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:16.201 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.201 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:16.201 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:16.201 21:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:16.460 21:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:17.399 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:17.399 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:17.399 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.399 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.659 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.659 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:17.659 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.659 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:17.919 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.919 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:17.919 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.919 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:17.919 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.919 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:17.919 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.919 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:18.179 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.179 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:18.179 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.179 21:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:18.438 21:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.438 21:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:18.438 21:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.438 21:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:18.438 21:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.438 21:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:18.699 21:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:18.699 21:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:18.959 21:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:18.959 21:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:19.903 21:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:19.903 21:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:19.903 21:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.903 21:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.165 21:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.165 21:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:20.165 21:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.165 21:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.426 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.426 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.426 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.426 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:20.687 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.687 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:20.687 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.687 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:20.687 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.687 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:20.687 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.687 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.948 21:42:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.948 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:20.948 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.948 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:21.209 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.209 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:21.209 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:21.209 21:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:21.470 21:42:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:22.411 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:22.411 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:22.411 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.411 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.671 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.671 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:22.671 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.671 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.671 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.671 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:22.671 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.671 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.931 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.931 21:42:12 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.931 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.931 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:23.192 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.192 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:23.192 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.192 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:23.192 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.192 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:23.192 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.192 21:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:23.452 21:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.452 21:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:23.452 21:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:23.717 21:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:23.717 21:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:24.656 21:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:24.656 21:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:24.656 21:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.656 21:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:24.974 21:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.974 21:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:24.974 21:42:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.974 21:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.234 21:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.234 21:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.234 21:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.234 21:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.234 21:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.234 21:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:25.234 21:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.234 21:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.495 21:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.495 21:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.495 21:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.495 21:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:25.756 21:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.756 21:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:25.756 21:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.756 21:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:25.756 21:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.756 21:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:25.756 21:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:26.015 21:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:26.275 21:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:27.217 21:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:27.217 21:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:27.217 21:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.217 21:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:27.217 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.217 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:27.217 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.478 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.478 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.478 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.478 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.478 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.739 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.739 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.739 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.739 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:27.739 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.739 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:27.739 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.739 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:28.000 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.000 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:28.000 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.000 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:28.265 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.265 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2307733 00:26:28.265 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2307733 ']' 00:26:28.265 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2307733 00:26:28.265 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:28.265 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:28.265 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2307733 00:26:28.265 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:28.265 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:28.265 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2307733' 00:26:28.265 killing process with pid 2307733 00:26:28.265 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2307733 00:26:28.265 21:42:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2307733 00:26:28.265 Connection closed with partial response: 00:26:28.265 00:26:28.265 00:26:28.265 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2307733 00:26:28.265 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:28.265 [2024-07-15 21:41:50.304560] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:26:28.265 [2024-07-15 21:41:50.304619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2307733 ] 00:26:28.265 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.265 [2024-07-15 21:41:50.354996] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.265 [2024-07-15 21:41:50.406658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.265 Running I/O for 90 seconds... 
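The check_status calls traced above (multipath_status.sh@92 through @135) all reduce to three small shell helpers whose behaviour can be read directly from the commands in the trace: set_ANA_state flips the ANA state of the 4420 and 4421 listeners, port_status compares one field of bdev_nvme_get_io_paths output against an expected value, and check_status asserts all six current/connected/accessible flags in one call. The following is a condensed sketch inferred only from what the trace shows; variable names and the &&-chaining are illustrative, not the verbatim test/nvmf/host/multipath_status.sh source.

    #!/usr/bin/env bash
    # Paths below are the ones that appear in the trace; adjust for your workspace.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # target-side RPC client
    bdevperf_sock=/var/tmp/bdevperf.sock                                      # bdevperf RPC socket used by the host-side checks
    NQN=nqn.2016-06.io.spdk:cnode1

    set_ANA_state() {   # $1 = ANA state for listener 4420, $2 = ANA state for listener 4421
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {     # $1 = trsvcid, $2 = field (current|connected|accessible), $3 = expected value
        [[ $("$rpc_py" -s "$bdevperf_sock" bdev_nvme_get_io_paths |
             jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2") == "$3" ]]
    }

    check_status() {    # six expected flags: current, connected, accessible for 4420 then 4421
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

The sleep 1 between each set_ANA_state and the following check_status presumably gives the host a moment to process the ANA change notification and re-read the ANA log page before the io_paths flags are inspected again.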
00:26:28.265 [2024-07-15 21:42:03.596284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.265 [2024-07-15 21:42:03.596318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:28.265 [2024-07-15 21:42:03.596354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.265 [2024-07-15 21:42:03.596360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:28.265 [2024-07-15 21:42:03.596371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.265 [2024-07-15 21:42:03.596376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:28.265 [2024-07-15 21:42:03.596387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.596392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.596402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.596407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.596417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.596422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.596432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.596437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.596447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.596452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.596462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.596466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.596476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.596481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.596491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.596501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.596511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.596516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.596526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.596531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.596541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.596545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.596555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.596560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.596570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.596575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.597335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.597346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.597360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.597365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.597378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.597383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.597396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.597401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.597414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.597419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.597432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.597437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.597450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:68864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.597455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.597470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.597476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.597489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.597494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.597506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.597512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.597524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.597530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.597542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.597547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.597559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:28.266 [2024-07-15 21:42:03.597564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:28.266 [2024-07-15 21:42:03.597577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
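The bdevperf log being dumped here (try.txt) records WRITE submissions on qid:1 completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), the path-related status returned once a listener's ANA state has been switched to inaccessible; this is what drives the initiator to fail I/O over to the remaining path. As an illustrative way to tally how often that status appears in the captured file (path taken from the cat command above):

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt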
00:26:28.266 [2024-07-15 21:42:03.597582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:28.266 [2024-07-15 21:42:03.597594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:28.266 [2024-07-15 21:42:03.597599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted: WRITE and READ commands on qid:1 (lba 68936-69408 and 22616-23424), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:26:28.268 Received shutdown signal, test time was about 25.826463 seconds
00:26:28.268
00:26:28.268                                                                                                 Latency(us)
00:26:28.268 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:28.268 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:28.268 	 Verification LBA range: start 0x0 length 0x4000
00:26:28.268 	 Nvme0n1             :      25.83   11080.95      43.28       0.00       0.00   11532.78     354.99 3019898.88
00:26:28.268 ===================================================================================================================
00:26:28.268 Total               :              11080.95      43.28       0.00       0.00   11532.78     354.99 3019898.88
00:26:28.268 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:28.530 rmmod nvme_tcp 00:26:28.530 rmmod nvme_fabrics 00:26:28.530 rmmod nvme_keyring 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2307376 ']' 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2307376 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2307376 ']' 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2307376 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:28.530 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2307376 00:26:28.791 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:28.791 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:28.791 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2307376' 00:26:28.791 killing process with pid 2307376 00:26:28.791 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2307376 00:26:28.791 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2307376 00:26:28.791 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:28.791 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:28.791 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:28.791 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:28.791 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:28.791 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.791 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.791 21:42:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.338 21:42:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:31.338 00:26:31.338 real 0m39.408s 00:26:31.338 user 1m42.103s 00:26:31.338 sys 0m10.636s 00:26:31.338 21:42:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:31.338 21:42:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:31.338 ************************************ 00:26:31.338 END TEST nvmf_host_multipath_status 00:26:31.338 ************************************ 00:26:31.338 21:42:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:31.338 21:42:20 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:31.338 21:42:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:31.338 21:42:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:31.338 21:42:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:31.338 ************************************ 00:26:31.338 START TEST nvmf_discovery_remove_ifc 00:26:31.338 ************************************ 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:31.338 * Looking for test storage... 00:26:31.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@47 -- # : 0 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:31.338 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.339 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:31.339 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.339 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:31.339 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:31.339 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:31.339 21:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- 
# local -a pci_net_devs 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:37.966 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:37.966 21:42:27 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:37.966 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:37.966 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:37.966 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:37.966 21:42:27 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:37.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:37.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:26:37.966 00:26:37.966 --- 10.0.0.2 ping statistics --- 00:26:37.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.966 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:37.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:37.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:26:37.966 00:26:37.966 --- 10.0.0.1 ping statistics --- 00:26:37.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.966 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:37.966 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:37.967 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.967 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:37.967 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:37.967 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.967 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:37.967 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:38.228 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:38.228 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:38.228 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:38.228 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.228 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2317851 00:26:38.228 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2317851 00:26:38.228 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2317851 ']' 00:26:38.228 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.228 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:38.228 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.228 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:38.228 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.228 21:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:38.228 [2024-07-15 21:42:27.858667] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:26:38.228 [2024-07-15 21:42:27.858731] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.228 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.228 [2024-07-15 21:42:27.944355] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.490 [2024-07-15 21:42:28.037661] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:38.490 [2024-07-15 21:42:28.037707] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:38.490 [2024-07-15 21:42:28.037716] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:38.490 [2024-07-15 21:42:28.037730] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:38.490 [2024-07-15 21:42:28.037736] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:38.490 [2024-07-15 21:42:28.037761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.062 [2024-07-15 21:42:28.695783] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.062 [2024-07-15 21:42:28.703980] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:39.062 null0 00:26:39.062 [2024-07-15 21:42:28.735954] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2318183 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2318183 /tmp/host.sock 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2318183 ']' 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:26:39.062 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.062 21:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:39.062 [2024-07-15 21:42:28.819333] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:26:39.062 [2024-07-15 21:42:28.819406] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2318183 ] 00:26:39.062 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.322 [2024-07-15 21:42:28.882860] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.322 [2024-07-15 21:42:28.958165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.893 21:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:39.893 21:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:39.893 21:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:39.893 21:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:39.893 21:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.893 21:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.893 21:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.893 21:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:39.893 21:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.893 21:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.893 21:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.893 21:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:39.893 21:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.893 21:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.276 [2024-07-15 21:42:30.706250] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:41.276 [2024-07-15 21:42:30.706278] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:41.277 [2024-07-15 21:42:30.706292] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:41.277 [2024-07-15 21:42:30.834701] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:41.277 [2024-07-15 21:42:31.017834] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:41.277 [2024-07-15 21:42:31.017888] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:41.277 [2024-07-15 21:42:31.017911] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:41.277 [2024-07-15 21:42:31.017926] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:41.277 [2024-07-15 21:42:31.017953] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:41.277 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.277 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:41.277 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.277 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.277 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.277 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.277 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.277 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.277 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.277 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.277 [2024-07-15 21:42:31.064957] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10faa10 was disconnected and freed. delete nvme_qpair. 
00:26:41.277 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:41.277 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:41.536 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:41.536 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:41.536 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.536 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.536 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.536 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.536 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.536 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.536 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.536 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.536 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:41.536 21:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:42.475 21:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.475 21:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.475 21:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.475 21:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.475 21:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.475 21:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.475 21:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.475 21:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.735 21:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:42.735 21:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:43.684 21:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.685 21:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.685 21:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.685 21:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.685 21:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.685 21:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.685 21:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:26:43.685 21:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.685 21:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:43.685 21:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:44.630 21:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:44.630 21:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:44.630 21:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.630 21:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:44.630 21:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.630 21:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:44.630 21:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.630 21:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.630 21:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:44.630 21:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:46.014 21:42:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:46.014 21:42:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.014 21:42:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:46.014 21:42:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.014 21:42:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:46.014 21:42:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.014 21:42:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:46.014 21:42:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.014 21:42:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:46.014 21:42:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:46.955 [2024-07-15 21:42:36.458281] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:46.955 [2024-07-15 21:42:36.458318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.955 [2024-07-15 21:42:36.458330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.955 [2024-07-15 21:42:36.458339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.955 [2024-07-15 21:42:36.458346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.955 [2024-07-15 21:42:36.458354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.955 [2024-07-15 21:42:36.458362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.955 [2024-07-15 21:42:36.458370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.955 [2024-07-15 21:42:36.458377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.955 [2024-07-15 21:42:36.458385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:46.955 [2024-07-15 21:42:36.458392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:46.955 [2024-07-15 21:42:36.458399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c12f0 is same with the state(5) to be set 00:26:46.955 [2024-07-15 21:42:36.468302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c12f0 (9): Bad file descriptor 00:26:46.955 21:42:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:46.955 [2024-07-15 21:42:36.478341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:46.955 21:42:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.955 21:42:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:46.955 21:42:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:46.955 21:42:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.956 21:42:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:46.956 21:42:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.896 [2024-07-15 21:42:37.513202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:47.896 [2024-07-15 21:42:37.513241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10c12f0 with addr=10.0.0.2, port=4420 00:26:47.896 [2024-07-15 21:42:37.513252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c12f0 is same with the state(5) to be set 00:26:47.896 [2024-07-15 21:42:37.513277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c12f0 (9): Bad file descriptor 00:26:47.896 [2024-07-15 21:42:37.513642] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:47.896 [2024-07-15 21:42:37.513659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:47.896 [2024-07-15 21:42:37.513666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:47.896 [2024-07-15 21:42:37.513675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:47.896 [2024-07-15 21:42:37.513690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:47.896 [2024-07-15 21:42:37.513699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:47.896 21:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.896 21:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:47.896 21:42:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:48.836 [2024-07-15 21:42:38.516075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:48.836 [2024-07-15 21:42:38.516094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:48.836 [2024-07-15 21:42:38.516101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:48.836 [2024-07-15 21:42:38.516109] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:48.836 [2024-07-15 21:42:38.516124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.836 [2024-07-15 21:42:38.516143] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:48.836 [2024-07-15 21:42:38.516163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.836 [2024-07-15 21:42:38.516172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.836 [2024-07-15 21:42:38.516182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.836 [2024-07-15 21:42:38.516189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.836 [2024-07-15 21:42:38.516197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.836 [2024-07-15 21:42:38.516205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.836 [2024-07-15 21:42:38.516213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.836 [2024-07-15 21:42:38.516220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.836 [2024-07-15 21:42:38.516228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.836 [2024-07-15 21:42:38.516235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.836 [2024-07-15 21:42:38.516242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:26:48.836 [2024-07-15 21:42:38.516756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c0730 (9): Bad file descriptor 00:26:48.836 [2024-07-15 21:42:38.517767] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:48.836 [2024-07-15 21:42:38.517778] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:48.836 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:48.836 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.836 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:48.836 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.836 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:48.836 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.836 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:48.836 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.836 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:48.836 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.836 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.096 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:49.096 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:49.096 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.096 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:49.096 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.096 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:49.096 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.096 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:49.096 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.096 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:49.096 21:42:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:50.049 21:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:50.049 21:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:50.049 21:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.049 21:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:50.049 21:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.049 21:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:50.049 21:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.049 21:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.049 21:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:50.049 21:42:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:50.989 [2024-07-15 21:42:40.569293] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:50.989 [2024-07-15 21:42:40.569318] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:50.989 [2024-07-15 21:42:40.569332] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:50.989 [2024-07-15 21:42:40.697733] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:51.247 21:42:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:51.247 21:42:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.247 21:42:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:51.247 21:42:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.247 21:42:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:51.247 21:42:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.247 21:42:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:51.247 21:42:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.247 21:42:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:51.247 21:42:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:51.247 [2024-07-15 21:42:40.884041] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:51.247 [2024-07-15 21:42:40.884084] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:51.247 [2024-07-15 21:42:40.884104] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:51.247 [2024-07-15 21:42:40.884118] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:51.247 [2024-07-15 21:42:40.884133] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:51.247 [2024-07-15 21:42:40.888614] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10e1f90 was disconnected and freed. delete nvme_qpair. 
00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2318183 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2318183 ']' 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2318183 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2318183 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2318183' 00:26:52.187 killing process with pid 2318183 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2318183 00:26:52.187 21:42:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2318183 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:52.448 rmmod nvme_tcp 00:26:52.448 rmmod nvme_fabrics 00:26:52.448 rmmod nvme_keyring 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2317851 ']' 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2317851 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2317851 ']' 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2317851 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2317851 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2317851' 00:26:52.448 killing process with pid 2317851 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2317851 00:26:52.448 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2317851 00:26:52.710 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:52.710 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:52.710 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:52.710 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.710 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:52.710 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.710 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.710 21:42:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.639 21:42:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:54.639 00:26:54.639 real 0m23.750s 00:26:54.639 user 0m29.291s 00:26:54.639 sys 0m6.488s 00:26:54.639 21:42:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:54.639 21:42:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.639 ************************************ 00:26:54.639 END TEST nvmf_discovery_remove_ifc 00:26:54.639 ************************************ 00:26:54.900 21:42:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:54.900 21:42:44 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:54.900 21:42:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:54.900 21:42:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.900 21:42:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:54.900 ************************************ 00:26:54.900 START TEST nvmf_identify_kernel_target 00:26:54.900 ************************************ 
00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:54.900 * Looking for test storage... 00:26:54.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:54.900 21:42:44 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:54.900 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:54.901 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:54.901 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.901 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:54.901 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.901 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:54.901 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:54.901 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:54.901 21:42:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:03.114 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:03.114 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:03.114 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.114 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:03.115 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:03.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:03.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:27:03.115 00:27:03.115 --- 10.0.0.2 ping statistics --- 00:27:03.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.115 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:03.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:03.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:27:03.115 00:27:03.115 --- 10.0.0.1 ping statistics --- 00:27:03.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.115 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:03.115 21:42:51 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:03.115 21:42:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:05.655 Waiting for block devices as requested 00:27:05.655 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:05.655 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:05.655 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:05.655 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:05.655 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:05.655 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:05.915 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:05.915 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:05.915 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:06.174 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:06.174 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:06.433 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:06.433 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:06.433 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:06.433 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:06.693 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:06.693 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:06.955 No valid GPT data, bailing 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:06.955 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:07.217 00:27:07.217 Discovery Log Number of Records 2, Generation counter 2 00:27:07.217 =====Discovery Log Entry 0====== 00:27:07.217 trtype: tcp 00:27:07.217 adrfam: ipv4 00:27:07.217 subtype: current discovery subsystem 00:27:07.217 treq: not specified, sq flow control disable supported 00:27:07.217 portid: 1 00:27:07.217 trsvcid: 4420 00:27:07.217 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:07.217 traddr: 10.0.0.1 00:27:07.217 eflags: none 00:27:07.217 sectype: none 00:27:07.217 =====Discovery Log Entry 1====== 00:27:07.217 trtype: tcp 00:27:07.217 adrfam: ipv4 00:27:07.217 subtype: nvme subsystem 00:27:07.217 treq: not specified, sq flow control disable supported 00:27:07.217 portid: 1 00:27:07.217 trsvcid: 4420 00:27:07.217 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:07.217 traddr: 10.0.0.1 00:27:07.217 eflags: none 00:27:07.217 sectype: none 00:27:07.217 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:07.217 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:07.217 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.217 ===================================================== 00:27:07.217 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:07.217 ===================================================== 00:27:07.217 Controller Capabilities/Features 00:27:07.217 ================================ 00:27:07.217 Vendor ID: 0000 00:27:07.217 Subsystem Vendor ID: 0000 00:27:07.217 Serial Number: f93bd355c901620d34d2 00:27:07.217 Model Number: Linux 00:27:07.217 Firmware Version: 6.7.0-68 00:27:07.217 Recommended Arb Burst: 0 00:27:07.217 IEEE OUI Identifier: 00 00 00 00:27:07.217 Multi-path I/O 00:27:07.217 May have multiple subsystem ports: No 00:27:07.217 May have multiple 
controllers: No 00:27:07.217 Associated with SR-IOV VF: No 00:27:07.217 Max Data Transfer Size: Unlimited 00:27:07.217 Max Number of Namespaces: 0 00:27:07.217 Max Number of I/O Queues: 1024 00:27:07.217 NVMe Specification Version (VS): 1.3 00:27:07.217 NVMe Specification Version (Identify): 1.3 00:27:07.217 Maximum Queue Entries: 1024 00:27:07.217 Contiguous Queues Required: No 00:27:07.217 Arbitration Mechanisms Supported 00:27:07.217 Weighted Round Robin: Not Supported 00:27:07.217 Vendor Specific: Not Supported 00:27:07.217 Reset Timeout: 7500 ms 00:27:07.217 Doorbell Stride: 4 bytes 00:27:07.217 NVM Subsystem Reset: Not Supported 00:27:07.217 Command Sets Supported 00:27:07.217 NVM Command Set: Supported 00:27:07.217 Boot Partition: Not Supported 00:27:07.217 Memory Page Size Minimum: 4096 bytes 00:27:07.217 Memory Page Size Maximum: 4096 bytes 00:27:07.217 Persistent Memory Region: Not Supported 00:27:07.217 Optional Asynchronous Events Supported 00:27:07.217 Namespace Attribute Notices: Not Supported 00:27:07.217 Firmware Activation Notices: Not Supported 00:27:07.217 ANA Change Notices: Not Supported 00:27:07.217 PLE Aggregate Log Change Notices: Not Supported 00:27:07.217 LBA Status Info Alert Notices: Not Supported 00:27:07.217 EGE Aggregate Log Change Notices: Not Supported 00:27:07.217 Normal NVM Subsystem Shutdown event: Not Supported 00:27:07.217 Zone Descriptor Change Notices: Not Supported 00:27:07.217 Discovery Log Change Notices: Supported 00:27:07.217 Controller Attributes 00:27:07.217 128-bit Host Identifier: Not Supported 00:27:07.217 Non-Operational Permissive Mode: Not Supported 00:27:07.217 NVM Sets: Not Supported 00:27:07.217 Read Recovery Levels: Not Supported 00:27:07.217 Endurance Groups: Not Supported 00:27:07.217 Predictable Latency Mode: Not Supported 00:27:07.217 Traffic Based Keep ALive: Not Supported 00:27:07.217 Namespace Granularity: Not Supported 00:27:07.217 SQ Associations: Not Supported 00:27:07.217 UUID List: Not Supported 00:27:07.217 Multi-Domain Subsystem: Not Supported 00:27:07.217 Fixed Capacity Management: Not Supported 00:27:07.217 Variable Capacity Management: Not Supported 00:27:07.217 Delete Endurance Group: Not Supported 00:27:07.217 Delete NVM Set: Not Supported 00:27:07.217 Extended LBA Formats Supported: Not Supported 00:27:07.217 Flexible Data Placement Supported: Not Supported 00:27:07.217 00:27:07.217 Controller Memory Buffer Support 00:27:07.217 ================================ 00:27:07.217 Supported: No 00:27:07.217 00:27:07.217 Persistent Memory Region Support 00:27:07.217 ================================ 00:27:07.217 Supported: No 00:27:07.217 00:27:07.217 Admin Command Set Attributes 00:27:07.217 ============================ 00:27:07.217 Security Send/Receive: Not Supported 00:27:07.217 Format NVM: Not Supported 00:27:07.217 Firmware Activate/Download: Not Supported 00:27:07.217 Namespace Management: Not Supported 00:27:07.217 Device Self-Test: Not Supported 00:27:07.217 Directives: Not Supported 00:27:07.217 NVMe-MI: Not Supported 00:27:07.217 Virtualization Management: Not Supported 00:27:07.217 Doorbell Buffer Config: Not Supported 00:27:07.217 Get LBA Status Capability: Not Supported 00:27:07.217 Command & Feature Lockdown Capability: Not Supported 00:27:07.217 Abort Command Limit: 1 00:27:07.217 Async Event Request Limit: 1 00:27:07.217 Number of Firmware Slots: N/A 00:27:07.217 Firmware Slot 1 Read-Only: N/A 00:27:07.217 Firmware Activation Without Reset: N/A 00:27:07.217 Multiple Update Detection Support: N/A 
00:27:07.217 Firmware Update Granularity: No Information Provided 00:27:07.217 Per-Namespace SMART Log: No 00:27:07.217 Asymmetric Namespace Access Log Page: Not Supported 00:27:07.217 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:07.217 Command Effects Log Page: Not Supported 00:27:07.217 Get Log Page Extended Data: Supported 00:27:07.217 Telemetry Log Pages: Not Supported 00:27:07.217 Persistent Event Log Pages: Not Supported 00:27:07.217 Supported Log Pages Log Page: May Support 00:27:07.217 Commands Supported & Effects Log Page: Not Supported 00:27:07.217 Feature Identifiers & Effects Log Page:May Support 00:27:07.217 NVMe-MI Commands & Effects Log Page: May Support 00:27:07.217 Data Area 4 for Telemetry Log: Not Supported 00:27:07.217 Error Log Page Entries Supported: 1 00:27:07.217 Keep Alive: Not Supported 00:27:07.217 00:27:07.217 NVM Command Set Attributes 00:27:07.217 ========================== 00:27:07.217 Submission Queue Entry Size 00:27:07.217 Max: 1 00:27:07.217 Min: 1 00:27:07.217 Completion Queue Entry Size 00:27:07.217 Max: 1 00:27:07.217 Min: 1 00:27:07.217 Number of Namespaces: 0 00:27:07.217 Compare Command: Not Supported 00:27:07.217 Write Uncorrectable Command: Not Supported 00:27:07.217 Dataset Management Command: Not Supported 00:27:07.217 Write Zeroes Command: Not Supported 00:27:07.217 Set Features Save Field: Not Supported 00:27:07.217 Reservations: Not Supported 00:27:07.217 Timestamp: Not Supported 00:27:07.217 Copy: Not Supported 00:27:07.217 Volatile Write Cache: Not Present 00:27:07.217 Atomic Write Unit (Normal): 1 00:27:07.217 Atomic Write Unit (PFail): 1 00:27:07.217 Atomic Compare & Write Unit: 1 00:27:07.217 Fused Compare & Write: Not Supported 00:27:07.217 Scatter-Gather List 00:27:07.217 SGL Command Set: Supported 00:27:07.217 SGL Keyed: Not Supported 00:27:07.217 SGL Bit Bucket Descriptor: Not Supported 00:27:07.217 SGL Metadata Pointer: Not Supported 00:27:07.217 Oversized SGL: Not Supported 00:27:07.217 SGL Metadata Address: Not Supported 00:27:07.217 SGL Offset: Supported 00:27:07.217 Transport SGL Data Block: Not Supported 00:27:07.217 Replay Protected Memory Block: Not Supported 00:27:07.217 00:27:07.217 Firmware Slot Information 00:27:07.217 ========================= 00:27:07.217 Active slot: 0 00:27:07.217 00:27:07.217 00:27:07.217 Error Log 00:27:07.217 ========= 00:27:07.217 00:27:07.217 Active Namespaces 00:27:07.217 ================= 00:27:07.217 Discovery Log Page 00:27:07.217 ================== 00:27:07.217 Generation Counter: 2 00:27:07.217 Number of Records: 2 00:27:07.217 Record Format: 0 00:27:07.217 00:27:07.217 Discovery Log Entry 0 00:27:07.217 ---------------------- 00:27:07.217 Transport Type: 3 (TCP) 00:27:07.217 Address Family: 1 (IPv4) 00:27:07.217 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:07.217 Entry Flags: 00:27:07.217 Duplicate Returned Information: 0 00:27:07.217 Explicit Persistent Connection Support for Discovery: 0 00:27:07.217 Transport Requirements: 00:27:07.217 Secure Channel: Not Specified 00:27:07.217 Port ID: 1 (0x0001) 00:27:07.217 Controller ID: 65535 (0xffff) 00:27:07.217 Admin Max SQ Size: 32 00:27:07.217 Transport Service Identifier: 4420 00:27:07.217 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:07.217 Transport Address: 10.0.0.1 00:27:07.217 Discovery Log Entry 1 00:27:07.217 ---------------------- 00:27:07.217 Transport Type: 3 (TCP) 00:27:07.217 Address Family: 1 (IPv4) 00:27:07.217 Subsystem Type: 2 (NVM Subsystem) 00:27:07.217 Entry Flags: 
00:27:07.217 Duplicate Returned Information: 0 00:27:07.217 Explicit Persistent Connection Support for Discovery: 0 00:27:07.217 Transport Requirements: 00:27:07.218 Secure Channel: Not Specified 00:27:07.218 Port ID: 1 (0x0001) 00:27:07.218 Controller ID: 65535 (0xffff) 00:27:07.218 Admin Max SQ Size: 32 00:27:07.218 Transport Service Identifier: 4420 00:27:07.218 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:07.218 Transport Address: 10.0.0.1 00:27:07.218 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:07.218 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.218 get_feature(0x01) failed 00:27:07.218 get_feature(0x02) failed 00:27:07.218 get_feature(0x04) failed 00:27:07.218 ===================================================== 00:27:07.218 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:07.218 ===================================================== 00:27:07.218 Controller Capabilities/Features 00:27:07.218 ================================ 00:27:07.218 Vendor ID: 0000 00:27:07.218 Subsystem Vendor ID: 0000 00:27:07.218 Serial Number: 8186119a135d0d56d8b7 00:27:07.218 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:07.218 Firmware Version: 6.7.0-68 00:27:07.218 Recommended Arb Burst: 6 00:27:07.218 IEEE OUI Identifier: 00 00 00 00:27:07.218 Multi-path I/O 00:27:07.218 May have multiple subsystem ports: Yes 00:27:07.218 May have multiple controllers: Yes 00:27:07.218 Associated with SR-IOV VF: No 00:27:07.218 Max Data Transfer Size: Unlimited 00:27:07.218 Max Number of Namespaces: 1024 00:27:07.218 Max Number of I/O Queues: 128 00:27:07.218 NVMe Specification Version (VS): 1.3 00:27:07.218 NVMe Specification Version (Identify): 1.3 00:27:07.218 Maximum Queue Entries: 1024 00:27:07.218 Contiguous Queues Required: No 00:27:07.218 Arbitration Mechanisms Supported 00:27:07.218 Weighted Round Robin: Not Supported 00:27:07.218 Vendor Specific: Not Supported 00:27:07.218 Reset Timeout: 7500 ms 00:27:07.218 Doorbell Stride: 4 bytes 00:27:07.218 NVM Subsystem Reset: Not Supported 00:27:07.218 Command Sets Supported 00:27:07.218 NVM Command Set: Supported 00:27:07.218 Boot Partition: Not Supported 00:27:07.218 Memory Page Size Minimum: 4096 bytes 00:27:07.218 Memory Page Size Maximum: 4096 bytes 00:27:07.218 Persistent Memory Region: Not Supported 00:27:07.218 Optional Asynchronous Events Supported 00:27:07.218 Namespace Attribute Notices: Supported 00:27:07.218 Firmware Activation Notices: Not Supported 00:27:07.218 ANA Change Notices: Supported 00:27:07.218 PLE Aggregate Log Change Notices: Not Supported 00:27:07.218 LBA Status Info Alert Notices: Not Supported 00:27:07.218 EGE Aggregate Log Change Notices: Not Supported 00:27:07.218 Normal NVM Subsystem Shutdown event: Not Supported 00:27:07.218 Zone Descriptor Change Notices: Not Supported 00:27:07.218 Discovery Log Change Notices: Not Supported 00:27:07.218 Controller Attributes 00:27:07.218 128-bit Host Identifier: Supported 00:27:07.218 Non-Operational Permissive Mode: Not Supported 00:27:07.218 NVM Sets: Not Supported 00:27:07.218 Read Recovery Levels: Not Supported 00:27:07.218 Endurance Groups: Not Supported 00:27:07.218 Predictable Latency Mode: Not Supported 00:27:07.218 Traffic Based Keep ALive: Supported 00:27:07.218 Namespace Granularity: Not Supported 
00:27:07.218 SQ Associations: Not Supported 00:27:07.218 UUID List: Not Supported 00:27:07.218 Multi-Domain Subsystem: Not Supported 00:27:07.218 Fixed Capacity Management: Not Supported 00:27:07.218 Variable Capacity Management: Not Supported 00:27:07.218 Delete Endurance Group: Not Supported 00:27:07.218 Delete NVM Set: Not Supported 00:27:07.218 Extended LBA Formats Supported: Not Supported 00:27:07.218 Flexible Data Placement Supported: Not Supported 00:27:07.218 00:27:07.218 Controller Memory Buffer Support 00:27:07.218 ================================ 00:27:07.218 Supported: No 00:27:07.218 00:27:07.218 Persistent Memory Region Support 00:27:07.218 ================================ 00:27:07.218 Supported: No 00:27:07.218 00:27:07.218 Admin Command Set Attributes 00:27:07.218 ============================ 00:27:07.218 Security Send/Receive: Not Supported 00:27:07.218 Format NVM: Not Supported 00:27:07.218 Firmware Activate/Download: Not Supported 00:27:07.218 Namespace Management: Not Supported 00:27:07.218 Device Self-Test: Not Supported 00:27:07.218 Directives: Not Supported 00:27:07.218 NVMe-MI: Not Supported 00:27:07.218 Virtualization Management: Not Supported 00:27:07.218 Doorbell Buffer Config: Not Supported 00:27:07.218 Get LBA Status Capability: Not Supported 00:27:07.218 Command & Feature Lockdown Capability: Not Supported 00:27:07.218 Abort Command Limit: 4 00:27:07.218 Async Event Request Limit: 4 00:27:07.218 Number of Firmware Slots: N/A 00:27:07.218 Firmware Slot 1 Read-Only: N/A 00:27:07.218 Firmware Activation Without Reset: N/A 00:27:07.218 Multiple Update Detection Support: N/A 00:27:07.218 Firmware Update Granularity: No Information Provided 00:27:07.218 Per-Namespace SMART Log: Yes 00:27:07.218 Asymmetric Namespace Access Log Page: Supported 00:27:07.218 ANA Transition Time : 10 sec 00:27:07.218 00:27:07.218 Asymmetric Namespace Access Capabilities 00:27:07.218 ANA Optimized State : Supported 00:27:07.218 ANA Non-Optimized State : Supported 00:27:07.218 ANA Inaccessible State : Supported 00:27:07.218 ANA Persistent Loss State : Supported 00:27:07.218 ANA Change State : Supported 00:27:07.218 ANAGRPID is not changed : No 00:27:07.218 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:07.218 00:27:07.218 ANA Group Identifier Maximum : 128 00:27:07.218 Number of ANA Group Identifiers : 128 00:27:07.218 Max Number of Allowed Namespaces : 1024 00:27:07.218 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:07.218 Command Effects Log Page: Supported 00:27:07.218 Get Log Page Extended Data: Supported 00:27:07.218 Telemetry Log Pages: Not Supported 00:27:07.218 Persistent Event Log Pages: Not Supported 00:27:07.218 Supported Log Pages Log Page: May Support 00:27:07.218 Commands Supported & Effects Log Page: Not Supported 00:27:07.218 Feature Identifiers & Effects Log Page:May Support 00:27:07.218 NVMe-MI Commands & Effects Log Page: May Support 00:27:07.218 Data Area 4 for Telemetry Log: Not Supported 00:27:07.218 Error Log Page Entries Supported: 128 00:27:07.218 Keep Alive: Supported 00:27:07.218 Keep Alive Granularity: 1000 ms 00:27:07.218 00:27:07.218 NVM Command Set Attributes 00:27:07.218 ========================== 00:27:07.218 Submission Queue Entry Size 00:27:07.218 Max: 64 00:27:07.218 Min: 64 00:27:07.218 Completion Queue Entry Size 00:27:07.218 Max: 16 00:27:07.218 Min: 16 00:27:07.218 Number of Namespaces: 1024 00:27:07.218 Compare Command: Not Supported 00:27:07.218 Write Uncorrectable Command: Not Supported 00:27:07.218 Dataset Management Command: Supported 
00:27:07.218 Write Zeroes Command: Supported 00:27:07.218 Set Features Save Field: Not Supported 00:27:07.218 Reservations: Not Supported 00:27:07.218 Timestamp: Not Supported 00:27:07.218 Copy: Not Supported 00:27:07.218 Volatile Write Cache: Present 00:27:07.218 Atomic Write Unit (Normal): 1 00:27:07.218 Atomic Write Unit (PFail): 1 00:27:07.218 Atomic Compare & Write Unit: 1 00:27:07.218 Fused Compare & Write: Not Supported 00:27:07.218 Scatter-Gather List 00:27:07.218 SGL Command Set: Supported 00:27:07.218 SGL Keyed: Not Supported 00:27:07.218 SGL Bit Bucket Descriptor: Not Supported 00:27:07.218 SGL Metadata Pointer: Not Supported 00:27:07.218 Oversized SGL: Not Supported 00:27:07.218 SGL Metadata Address: Not Supported 00:27:07.218 SGL Offset: Supported 00:27:07.218 Transport SGL Data Block: Not Supported 00:27:07.218 Replay Protected Memory Block: Not Supported 00:27:07.218 00:27:07.218 Firmware Slot Information 00:27:07.218 ========================= 00:27:07.218 Active slot: 0 00:27:07.218 00:27:07.218 Asymmetric Namespace Access 00:27:07.218 =========================== 00:27:07.218 Change Count : 0 00:27:07.218 Number of ANA Group Descriptors : 1 00:27:07.218 ANA Group Descriptor : 0 00:27:07.218 ANA Group ID : 1 00:27:07.218 Number of NSID Values : 1 00:27:07.218 Change Count : 0 00:27:07.218 ANA State : 1 00:27:07.218 Namespace Identifier : 1 00:27:07.218 00:27:07.218 Commands Supported and Effects 00:27:07.218 ============================== 00:27:07.218 Admin Commands 00:27:07.218 -------------- 00:27:07.218 Get Log Page (02h): Supported 00:27:07.218 Identify (06h): Supported 00:27:07.218 Abort (08h): Supported 00:27:07.218 Set Features (09h): Supported 00:27:07.218 Get Features (0Ah): Supported 00:27:07.218 Asynchronous Event Request (0Ch): Supported 00:27:07.218 Keep Alive (18h): Supported 00:27:07.218 I/O Commands 00:27:07.218 ------------ 00:27:07.218 Flush (00h): Supported 00:27:07.218 Write (01h): Supported LBA-Change 00:27:07.218 Read (02h): Supported 00:27:07.218 Write Zeroes (08h): Supported LBA-Change 00:27:07.218 Dataset Management (09h): Supported 00:27:07.218 00:27:07.218 Error Log 00:27:07.218 ========= 00:27:07.218 Entry: 0 00:27:07.218 Error Count: 0x3 00:27:07.218 Submission Queue Id: 0x0 00:27:07.218 Command Id: 0x5 00:27:07.218 Phase Bit: 0 00:27:07.218 Status Code: 0x2 00:27:07.218 Status Code Type: 0x0 00:27:07.218 Do Not Retry: 1 00:27:07.218 Error Location: 0x28 00:27:07.218 LBA: 0x0 00:27:07.219 Namespace: 0x0 00:27:07.219 Vendor Log Page: 0x0 00:27:07.219 ----------- 00:27:07.219 Entry: 1 00:27:07.219 Error Count: 0x2 00:27:07.219 Submission Queue Id: 0x0 00:27:07.219 Command Id: 0x5 00:27:07.219 Phase Bit: 0 00:27:07.219 Status Code: 0x2 00:27:07.219 Status Code Type: 0x0 00:27:07.219 Do Not Retry: 1 00:27:07.219 Error Location: 0x28 00:27:07.219 LBA: 0x0 00:27:07.219 Namespace: 0x0 00:27:07.219 Vendor Log Page: 0x0 00:27:07.219 ----------- 00:27:07.219 Entry: 2 00:27:07.219 Error Count: 0x1 00:27:07.219 Submission Queue Id: 0x0 00:27:07.219 Command Id: 0x4 00:27:07.219 Phase Bit: 0 00:27:07.219 Status Code: 0x2 00:27:07.219 Status Code Type: 0x0 00:27:07.219 Do Not Retry: 1 00:27:07.219 Error Location: 0x28 00:27:07.219 LBA: 0x0 00:27:07.219 Namespace: 0x0 00:27:07.219 Vendor Log Page: 0x0 00:27:07.219 00:27:07.219 Number of Queues 00:27:07.219 ================ 00:27:07.219 Number of I/O Submission Queues: 128 00:27:07.219 Number of I/O Completion Queues: 128 00:27:07.219 00:27:07.219 ZNS Specific Controller Data 00:27:07.219 
============================ 00:27:07.219 Zone Append Size Limit: 0 00:27:07.219 00:27:07.219 00:27:07.219 Active Namespaces 00:27:07.219 ================= 00:27:07.219 get_feature(0x05) failed 00:27:07.219 Namespace ID:1 00:27:07.219 Command Set Identifier: NVM (00h) 00:27:07.219 Deallocate: Supported 00:27:07.219 Deallocated/Unwritten Error: Not Supported 00:27:07.219 Deallocated Read Value: Unknown 00:27:07.219 Deallocate in Write Zeroes: Not Supported 00:27:07.219 Deallocated Guard Field: 0xFFFF 00:27:07.219 Flush: Supported 00:27:07.219 Reservation: Not Supported 00:27:07.219 Namespace Sharing Capabilities: Multiple Controllers 00:27:07.219 Size (in LBAs): 3750748848 (1788GiB) 00:27:07.219 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:07.219 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:07.219 UUID: 26d65b7a-c9dc-4187-ace4-3d95a3f147b3 00:27:07.219 Thin Provisioning: Not Supported 00:27:07.219 Per-NS Atomic Units: Yes 00:27:07.219 Atomic Write Unit (Normal): 8 00:27:07.219 Atomic Write Unit (PFail): 8 00:27:07.219 Preferred Write Granularity: 8 00:27:07.219 Atomic Compare & Write Unit: 8 00:27:07.219 Atomic Boundary Size (Normal): 0 00:27:07.219 Atomic Boundary Size (PFail): 0 00:27:07.219 Atomic Boundary Offset: 0 00:27:07.219 NGUID/EUI64 Never Reused: No 00:27:07.219 ANA group ID: 1 00:27:07.219 Namespace Write Protected: No 00:27:07.219 Number of LBA Formats: 1 00:27:07.219 Current LBA Format: LBA Format #00 00:27:07.219 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:07.219 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:07.219 rmmod nvme_tcp 00:27:07.219 rmmod nvme_fabrics 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.219 21:42:56 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.760 21:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:09.760 21:42:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:09.760 21:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:09.760 21:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:09.760 21:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:09.760 21:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:09.760 21:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:09.760 21:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:09.760 21:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:09.760 21:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:09.760 21:42:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:13.062 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:13.062 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:13.062 00:27:13.062 real 0m18.358s 00:27:13.062 user 0m4.958s 00:27:13.062 sys 0m10.359s 00:27:13.062 21:43:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:13.062 21:43:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:13.062 ************************************ 00:27:13.062 END TEST nvmf_identify_kernel_target 00:27:13.062 ************************************ 00:27:13.323 21:43:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:13.323 21:43:02 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:13.323 21:43:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:13.323 21:43:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.323 21:43:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:13.323 ************************************ 
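For reference, the kernel-target plumbing that nvmf_identify_kernel_target drives through nvmf/common.sh above reduces to roughly the following configfs sequence. This is a minimal sketch, run as root: the NQN, the /dev/nvme0n1 backing device and the 10.0.0.1:4420 TCP listener are the values seen in this run, the model-string write is omitted, and the teardown mirrors clean_kernel_target.

#!/usr/bin/env bash
# Minimal kernel NVMe-oF/TCP target via configfs, as exercised by the test above.
set -euo pipefail

subnqn=nqn.2016-06.io.spdk:testnqn
dev=/dev/nvme0n1                                   # backing namespace in this run
subsys=/sys/kernel/config/nvmet/subsystems/$subnqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
modprobe nvmet-tcp

# Subsystem with one namespace backed by the local NVMe drive
mkdir "$subsys"
echo 1 > "$subsys/attr_allow_any_host"
mkdir "$subsys/namespaces/1"
echo -n "$dev" > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"

# TCP listener on 10.0.0.1:4420
mkdir "$port"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

# Export the subsystem on the port, then check it shows up in discovery
ln -s "$subsys" "$port/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420

# Teardown, as clean_kernel_target does once the test finishes
rm -f "$port/subsystems/$subnqn"
echo 0 > "$subsys/namespaces/1/enable"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet

The two spdk_nvme_identify invocations above then talk to this target from user space, first to the discovery controller and then to nqn.2016-06.io.spdk:testnqn.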
00:27:13.323 START TEST nvmf_auth_host 00:27:13.323 ************************************ 00:27:13.323 21:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:13.323 * Looking for test storage... 00:27:13.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:13.323 21:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:13.323 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:13.323 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:13.323 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:13.323 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:13.323 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:13.323 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:13.323 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:13.323 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:13.323 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:13.323 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:13.323 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:13.323 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:13.324 21:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.476 
21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:21.476 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:21.476 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:21.476 Found net devices under 0000:4b:00.0: 
cvl_0_0 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.476 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:21.477 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.477 21:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:21.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:27:21.477 00:27:21.477 --- 10.0.0.2 ping statistics --- 00:27:21.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.477 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:21.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:27:21.477 00:27:21.477 --- 10.0.0.1 ping statistics --- 00:27:21.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.477 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2332358 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2332358 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2332358 ']' 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
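The nvmf_tcp_init sequence just traced is easier to read as a short script: the two back-to-back E810 ports are split between the host and a fresh network namespace, addressed as 10.0.0.1 (initiator side, cvl_0_1) and 10.0.0.2 (target side, cvl_0_0), opened up on TCP port 4420 and ping-tested in both directions. A sketch using the interface names and addresses from this run:

#!/usr/bin/env bash
# Rebuild of the test bed set up by nvmf_tcp_init above. Run as root; assumes
# the two ports are physically connected back to back.
set -euo pipefail

target_if=cvl_0_0        # moved into the namespace, becomes 10.0.0.2
initiator_if=cvl_0_1     # stays in the host namespace as 10.0.0.1
ns=cvl_0_0_ns_spdk

ip -4 addr flush dev "$target_if"
ip -4 addr flush dev "$initiator_if"

ip netns add "$ns"
ip link set "$target_if" netns "$ns"

ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

# Let NVMe/TCP traffic for port 4420 through the host firewall
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

# Reachability check in both directions, as the test does
ping -c 1 10.0.0.2
ip netns exec "$ns" ping -c 1 10.0.0.1

nvmf_tgt is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth, as in the trace that follows), so the host side plays the NVMe/TCP initiator and the namespace side the target.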
00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.477 21:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dd1221df2bf9f9d09ffe93706cf45ebe 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Z28 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dd1221df2bf9f9d09ffe93706cf45ebe 0 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dd1221df2bf9f9d09ffe93706cf45ebe 0 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dd1221df2bf9f9d09ffe93706cf45ebe 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Z28 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Z28 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Z28 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a67e780c79632ddb7ea0d84cd1825de1fe7ddbf3a6555b1c8d90525257bca866 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.fdU 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a67e780c79632ddb7ea0d84cd1825de1fe7ddbf3a6555b1c8d90525257bca866 3 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a67e780c79632ddb7ea0d84cd1825de1fe7ddbf3a6555b1c8d90525257bca866 3 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a67e780c79632ddb7ea0d84cd1825de1fe7ddbf3a6555b1c8d90525257bca866 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.fdU 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.fdU 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.fdU 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=63996d4c97c0a3e14bf4b58e4e09ed6ecf4195176444b5c3 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Bu7 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 63996d4c97c0a3e14bf4b58e4e09ed6ecf4195176444b5c3 0 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 63996d4c97c0a3e14bf4b58e4e09ed6ecf4195176444b5c3 0 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=63996d4c97c0a3e14bf4b58e4e09ed6ecf4195176444b5c3 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:21.477 
21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Bu7 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Bu7 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Bu7 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:21.477 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.478 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.478 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:21.478 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:21.478 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:21.478 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:21.478 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=04ea50b9bb0bee68504a46ede9b52ba47d680c2cde44b5be 00:27:21.478 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.eGQ 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 04ea50b9bb0bee68504a46ede9b52ba47d680c2cde44b5be 2 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 04ea50b9bb0bee68504a46ede9b52ba47d680c2cde44b5be 2 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=04ea50b9bb0bee68504a46ede9b52ba47d680c2cde44b5be 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.eGQ 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.eGQ 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.eGQ 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b2f6b12291ea66daa62d0d8f60ddd9b3 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SIM 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@729 -- # format_dhchap_key b2f6b12291ea66daa62d0d8f60ddd9b3 1 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b2f6b12291ea66daa62d0d8f60ddd9b3 1 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b2f6b12291ea66daa62d0d8f60ddd9b3 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SIM 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SIM 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.SIM 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=131a6cda0daca011117d1775a4f8c318 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.moE 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 131a6cda0daca011117d1775a4f8c318 1 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 131a6cda0daca011117d1775a4f8c318 1 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=131a6cda0daca011117d1775a4f8c318 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.moE 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.moE 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.moE 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:21.738 21:43:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=84a6989ace29850175317ea3f03739239ccecf6eef380b89 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zxd 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 84a6989ace29850175317ea3f03739239ccecf6eef380b89 2 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 84a6989ace29850175317ea3f03739239ccecf6eef380b89 2 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=84a6989ace29850175317ea3f03739239ccecf6eef380b89 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zxd 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zxd 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.zxd 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=789738a1efc9b5f890ca52235dd2f805 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Urg 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 789738a1efc9b5f890ca52235dd2f805 0 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 789738a1efc9b5f890ca52235dd2f805 0 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=789738a1efc9b5f890ca52235dd2f805 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:21.738 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Urg 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Urg 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- 
# ckeys[3]=/tmp/spdk.key-null.Urg 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b4b009cb77f74a1f9bf5bb0d9f6c00be0fadeb2667582c5ddd39f3e93cf7f6a8 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.psQ 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b4b009cb77f74a1f9bf5bb0d9f6c00be0fadeb2667582c5ddd39f3e93cf7f6a8 3 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b4b009cb77f74a1f9bf5bb0d9f6c00be0fadeb2667582c5ddd39f3e93cf7f6a8 3 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b4b009cb77f74a1f9bf5bb0d9f6c00be0fadeb2667582c5ddd39f3e93cf7f6a8 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.psQ 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.psQ 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.psQ 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2332358 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2332358 ']' 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
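The run above has now produced all five host keys (keys[0..4]) and their controller-key counterparts (ckeys[0..3]; ckeys[4] is left empty). The body piped to "python -" is elided by xtrace, so the following is only a minimal sketch of how a secret of the same shape can be built, assuming the formatter base64-encodes the hex string followed by its little-endian CRC-32, which is consistent with the DHHC-1:<id>:<secret>: strings that appear later in this log; it is not the in-tree gen_dhchap_key helper.

# Sketch: draw len/2 random bytes, hex-encode them, and wrap them as a DHHC-1 secret.
# The digest-id mapping (null=00, sha256=01, sha384=02, sha512=03) matches the keys above;
# the CRC-32 suffix is an assumption about the elided python step.
gen_key_sketch() {
    local digest=$1 len=$2 hex id file
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    case $digest in null) id=0 ;; sha256) id=1 ;; sha384) id=2 ;; sha512) id=3 ;; esac
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$hex" "$id" > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
secret = base64.b64encode(key + zlib.crc32(key).to_bytes(4, "little")).decode()
print(f"DHHC-1:{digest:02}:{secret}:")
PY
    chmod 0600 "$file"
    echo "$file"
}

For example, "gen_key_sketch sha384 48" draws 24 bytes from /dev/urandom and yields a DHHC-1:02:...: secret of the same shape as /tmp/spdk.key-sha384.eGQ above.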
00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Z28 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.fdU ]] 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fdU 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.998 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Bu7 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.eGQ ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eGQ 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.SIM 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.moE ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.moE 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.zxd 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Urg ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Urg 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.psQ 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
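The loop traced above registers every generated key file with the SPDK keyring before the kernel target is built. As a stand-alone sketch, assuming rpc_cmd simply forwards its arguments to scripts/rpc.py against the /var/tmp/spdk.sock socket this run waited for (the file names are the ones produced earlier in this log, and ckey4 is deliberately empty):

rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
# host keys key0..key4 and controller keys ckey0..ckey3, as generated above
keys=(/tmp/spdk.key-null.Z28 /tmp/spdk.key-null.Bu7 /tmp/spdk.key-sha256.SIM /tmp/spdk.key-sha384.zxd /tmp/spdk.key-sha512.psQ)
ckeys=(/tmp/spdk.key-sha512.fdU /tmp/spdk.key-sha384.eGQ /tmp/spdk.key-sha256.moE /tmp/spdk.key-null.Urg "")
for i in "${!keys[@]}"; do
    $rpc keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then
        $rpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done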
00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:22.259 21:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:25.566 Waiting for block devices as requested 00:27:25.566 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:25.566 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:25.566 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:25.566 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:25.826 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:25.826 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:25.826 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:26.086 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:26.086 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:26.346 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:26.346 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:26.346 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:26.606 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:26.606 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:26.606 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:26.866 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:26.866 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:27.867 No valid GPT data, bailing 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:27.867 00:27:27.867 Discovery Log Number of Records 2, Generation counter 2 00:27:27.867 =====Discovery Log Entry 0====== 00:27:27.867 trtype: tcp 00:27:27.867 adrfam: ipv4 00:27:27.867 subtype: current discovery subsystem 00:27:27.867 treq: not specified, sq flow control disable supported 00:27:27.867 portid: 1 00:27:27.867 trsvcid: 4420 00:27:27.867 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:27.867 traddr: 10.0.0.1 00:27:27.867 eflags: none 00:27:27.867 sectype: none 00:27:27.867 =====Discovery Log Entry 1====== 00:27:27.867 trtype: tcp 00:27:27.867 adrfam: ipv4 00:27:27.867 subtype: nvme subsystem 00:27:27.867 treq: not specified, sq flow control disable supported 00:27:27.867 portid: 1 00:27:27.867 trsvcid: 4420 00:27:27.867 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:27.867 traddr: 10.0.0.1 00:27:27.867 eflags: none 00:27:27.867 sectype: none 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 
]] 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.867 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.868 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.127 nvme0n1 00:27:28.127 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.127 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.127 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.127 21:43:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.127 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.127 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.127 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.127 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: ]] 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.128 
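For reference, the target in this test is the kernel nvmet stack, configured entirely through configfs; xtrace does not print redirection targets, so the attribute names below (attr_allow_any_host, device_path, enable, addr_*, dhchap_*) are the standard nvmet configfs attributes and are an assumption about where the bare echo calls above land, not a quote of the script. A condensed sketch of the layout built between the setup.sh reset and the first attach, using this run's NQNs, address and block device:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir -p "$subsys/namespaces/1" "$port" "$host"
echo 0            > "$subsys/attr_allow_any_host"      # assumed target of the bare "echo 0"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo tcp          > "$port/addr_trtype"
echo 10.0.0.1     > "$port/addr_traddr"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                     # expose the subsystem on the port
ln -s "$host"   "$subsys/allowed_hosts/"                # restrict it to this host NQN

# nvmet_auth_set_key: per-host DH-HMAC-CHAP parameters for keyid=1 (attribute names assumed;
# the key files are taken to hold the DHHC-1 strings visible in the attach calls above)
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
cat /tmp/spdk.key-null.Bu7   > "$host/dhchap_key"       # keys[1]
cat /tmp/spdk.key-sha384.eGQ > "$host/dhchap_ctrl_key"  # ckeys[1]

The per-host dhchap_* writes are what nvmet_auth_set_key repeats for every digest, DH group and key id exercised in the rest of this section.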
21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.128 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.389 nvme0n1 00:27:28.389 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.389 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.389 21:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.389 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.389 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.389 21:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.389 21:43:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.389 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.650 nvme0n1 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
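Each connect_authenticate pass on the initiator side boils down to the same four RPCs seen above: constrain the negotiable DH-HMAC-CHAP digests and DH groups, attach a controller with the host key (plus the controller key when one exists), confirm the controller and its nvme0n1 namespace appeared, then detach. Compressed into a sketch, again under the assumption that rpc_cmd is a thin wrapper over scripts/rpc.py:

rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]                                    # the trace checks exactly this
$rpc bdev_nvme_detach_controller nvme0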
00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: ]] 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.650 nvme0n1 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.650 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: ]] 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:28.911 21:43:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.911 nvme0n1 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.911 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.172 nvme0n1 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: ]] 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:29.172 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.433 21:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.433 nvme0n1 00:27:29.433 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.433 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.433 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.433 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.433 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.433 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.433 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.433 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.433 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.433 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.694 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.695 nvme0n1 00:27:29.695 
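From here on the log is the same cycle unrolled over the full matrix declared earlier: digests sha256, sha384 and sha512, DH groups ffdhe2048 through ffdhe8192, and key ids 0 to 4. The driver reduces to the loop below, with nvmet_auth_set_key and connect_authenticate standing for the target-side configfs writes and the initiator-side RPC sequence sketched above:

for digest in sha256 sha384 sha512; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # point the kernel host entry at this key
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # set options, attach, verify, detach
        done
    done
done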
21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.695 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: ]] 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.956 nvme0n1 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.956 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.217 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.217 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.217 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.217 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.217 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.217 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.217 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:30.217 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.217 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.217 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.217 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.217 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:30.217 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:30.217 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
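[annotation] For readability, the cycle that the trace keeps repeating for every dhgroup/keyid pair can be condensed into the sketch below. It is a hedged reconstruction from the traced commands only: rpc_cmd is assumed to forward to the SPDK target's RPC client, and keys[]/ckeys[] are assumed to be the DHHC-1 secret arrays populated earlier in host/auth.sh; the real script may differ in details not visible in the trace.

# Hedged reconstruction of one authenticated connect cycle, as traced above.
# Assumptions: rpc_cmd wraps the SPDK RPC client; keys[]/ckeys[] hold the DHHC-1
# secrets set up earlier in host/auth.sh; digest/dhgroup come from the outer loops.
for keyid in "${!keys[@]}"; do
    # Target side: install the key (and controller key, if any) for this keyid.
    nvmet_auth_set_key sha256 ffdhe3072 "$keyid"

    # Initiator side: restrict bdev_nvme to the digest/dhgroup under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # Attach over TCP; the --dhchap-ctrlr-key argument is only emitted when a
    # controller key exists for this keyid (note the ${ckeys[keyid]:+...} expansion,
    # which is why the keyid=4 attach in the trace carries no ckey).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}

    # The attach only succeeds if DH-HMAC-CHAP completed, so a controller named
    # nvme0 is the pass condition; detach before the next keyid.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
done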
00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: ]] 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.218 nvme0n1 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.218 21:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.218 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.479 
21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.479 21:43:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.479 nvme0n1 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.479 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: ]] 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:30.740 21:43:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.740 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.001 nvme0n1 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.001 21:43:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.001 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.262 nvme0n1 00:27:31.262 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.262 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.262 21:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.262 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.262 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.262 21:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: ]] 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.262 21:43:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.262 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.523 nvme0n1 00:27:31.523 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.523 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.523 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.523 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.523 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
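[annotation] The get_main_ns_ip helper traced from nvmf/common.sh before every attach appears to be a transport-to-variable lookup followed by an indirect expansion. Below is a minimal sketch under that reading; the TEST_TRANSPORT name and the early-return fallbacks are guesses (the trace only shows the value tcp and the branches that were taken), so treat it as illustrative rather than the library's exact code.

# Hedged sketch of get_main_ns_ip as traced (nvmf/common.sh@741-755): map the
# transport to the env var that holds the address, dereference it, echo it.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                    # traced as: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # traced as: [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}                    # variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                             # indirect expansion -> the address itself
    echo "${!ip}"                                           # prints 10.0.0.1 in this run
}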
00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: ]] 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.784 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.044 nvme0n1 00:27:32.044 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.044 21:43:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.044 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.044 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.045 21:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.307 nvme0n1 00:27:32.307 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.307 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.307 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.307 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.307 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.307 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.307 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.307 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.307 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.307 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.307 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:32.568 21:43:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: ]] 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.568 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.828 nvme0n1 00:27:32.828 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.828 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.828 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.828 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.828 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.828 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.089 
21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.089 21:43:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.089 21:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.090 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.090 21:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.351 nvme0n1 00:27:33.351 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.351 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.351 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.351 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.351 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: ]] 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.620 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.621 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.621 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.621 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.621 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.198 nvme0n1 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.198 
21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: ]] 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.198 21:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.458 nvme0n1 00:27:34.458 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.458 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.458 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.458 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.458 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.458 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.719 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.289 nvme0n1 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: ]] 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.289 21:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.290 21:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.290 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.290 21:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.862 nvme0n1 00:27:35.862 21:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.862 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.862 21:43:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.862 21:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.862 21:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.862 21:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.122 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.122 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.122 21:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.122 21:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.122 21:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.122 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.122 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:36.122 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.122 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.122 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.122 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.122 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:36.122 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:36.122 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.122 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.123 21:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.692 nvme0n1 00:27:36.692 21:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.692 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.692 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.692 21:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.692 21:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.692 21:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
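
[editor's note] The entries just above trace one pass of the suite's connect_authenticate helper for sha256 + ffdhe8192 with key index 1. A minimal sketch of that attach/verify/detach cycle, reusing only the RPC methods and flags that appear in the trace; the rpc_cmd definition and script path below are assumptions, and key1/ckey1 stand for secrets already loaded in this run:

  # one DH-HMAC-CHAP attach/verify/detach cycle, mirroring the host/auth.sh@60-65 trace above (sketch)
  rpc_cmd() { ./scripts/rpc.py "$@"; }    # assumed stand-in for the suite's rpc_cmd wrapper

  # restrict the initiator to the digest/dhgroup pair under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # connect to the authenticated subsystem on the target
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # the controller must show up under the expected name before it is torn down for the next iteration
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
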
DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: ]] 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.954 21:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.524 nvme0n1 00:27:37.524 21:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.524 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.524 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.524 21:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.524 21:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.524 21:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.785 
21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: ]] 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
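
[editor's note] The nvmf/common.sh@741-755 entries here (and continuing in the next chunk of the trace) show get_main_ns_ip resolving which address the host should dial: the transport name selects an environment-variable name, which is then dereferenced to 10.0.0.1. A rough reconstruction of that logic; the TEST_TRANSPORT variable name is an assumption, since the trace only shows the literal value tcp already substituted:

  # rough sketch of the IP-selection logic traced at nvmf/common.sh@741-755
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1              # transport must be known ("tcp" in this run)
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}              # e.g. NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1                       # indirect expansion -> 10.0.0.1 here
      echo "${!ip}"
  }
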
00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.785 21:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.786 21:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.786 21:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.786 21:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.356 nvme0n1 00:27:38.356 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.356 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.356 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.356 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.356 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.356 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.617 
21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.617 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.187 nvme0n1 00:27:39.187 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.187 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.187 21:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.187 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.187 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.187 21:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: ]] 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.448 nvme0n1 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.448 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
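
[editor's note] The ckey=(...) assignment recorded at host/auth.sh@58 above relies on bash's ${var:+word} expansion so that --dhchap-ctrlr-key is only passed when a controller key exists for that key index (key index 4 in this trace has an empty ckey and gets no such argument). A self-contained illustration of the pattern, with hypothetical array contents:

  # demonstrate the ${var:+word} guard used for the optional controller key
  ckeys=( "ckey-secret-0" "ckey-secret-1" "" )   # hypothetical: the last index has no controller key

  for keyid in "${!ckeys[@]}"; do
      # expands to the two-word option only when ckeys[keyid] is non-empty
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
  done

Running this prints two extra arguments for the populated indices and zero for the empty one, which matches the attach_controller invocations above (key3/ckey3 and key1/ckey1 carry the controller key, key4 does not).
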
00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.710 nvme0n1 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: ]] 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.710 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:39.711 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.711 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.971 nvme0n1 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.971 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: ]] 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.972 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.232 nvme0n1 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.232 21:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.232 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.232 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.232 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.232 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.232 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.232 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.232 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.232 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.232 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.232 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.232 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.232 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.232 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.232 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.232 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.493 nvme0n1 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: ]] 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.493 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.754 nvme0n1 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
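(Editor's note) The connect_authenticate call logged just above, repeated below for every digest/dhgroup/keyid combination, boils down to the host-side sequence sketched here. This is a condensed editorial reconstruction pieced together from the rpc_cmd calls in this trace (rpc_cmd being the test suite's JSON-RPC wrapper), not the literal body of host/auth.sh, so argument handling and error checking may differ.

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	local ckey=()

	# The controller key is optional; keyid 4 in this run has no ckey, so the
	# extra flag is only added when one exists (mirrors the expansion at host/auth.sh@58).
	[[ -n ${ckeys[keyid]:-} ]] && ckey=(--dhchap-ctrlr-key "ckey${keyid}")

	# Restrict the host to the digest / DH group pair under test.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

	# Attach to the target with the DH-HMAC-CHAP key for this keyid; the address
	# resolves to 10.0.0.1:4420 via get_main_ns_ip in this run.
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"

	# Authentication passed if the controller shows up under its expected name,
	# after which it is detached so the next combination can be exercised.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}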
00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.754 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.014 nvme0n1 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: ]] 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.014 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.015 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.015 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.015 21:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.015 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.015 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.015 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.275 nvme0n1 00:27:41.275 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.275 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.275 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.275 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.275 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.275 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.275 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.275 21:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.275 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.275 21:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: ]] 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.275 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.535 nvme0n1 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.535 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.796 nvme0n1 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.796 21:43:31 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: ]] 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.796 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.057 nvme0n1 00:27:42.057 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:42.317 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.318 21:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.577 nvme0n1 00:27:42.577 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.577 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.577 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.577 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.577 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.577 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.577 21:43:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.577 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.577 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.577 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.577 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.577 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: ]] 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.578 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.836 nvme0n1 00:27:42.836 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.836 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.836 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.836 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.836 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.836 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.836 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.836 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.836 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.836 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.095 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: ]] 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:43.096 21:43:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.096 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.357 nvme0n1 00:27:43.357 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.357 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.357 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.357 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.357 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.357 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.357 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.357 21:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.357 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.357 21:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.357 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.357 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:43.358 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.625 nvme0n1 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: ]] 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.625 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.256 nvme0n1 00:27:44.256 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.256 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.256 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.256 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.256 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.256 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.256 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.256 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.256 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.257 21:43:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.826 nvme0n1 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.826 21:43:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: ]] 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.826 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.403 nvme0n1 00:27:45.404 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.404 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.404 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.404 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.404 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.404 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.404 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.404 21:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.404 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.404 21:43:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: ]] 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.404 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.975 nvme0n1 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.975 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.976 21:43:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.547 nvme0n1 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: ]] 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.547 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.548 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.548 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.117 nvme0n1 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:47.117 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.118 21:43:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.066 nvme0n1 00:27:48.066 21:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.066 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.066 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.066 21:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.066 21:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: ]] 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.067 21:43:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.010 nvme0n1 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: ]] 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.010 21:43:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.581 nvme0n1 00:27:49.581 21:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.581 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.582 21:43:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.582 21:43:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.524 nvme0n1 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: ]] 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.524 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.785 nvme0n1 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.785 21:43:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.785 nvme0n1 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.785 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: ]] 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.046 nvme0n1 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.046 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.307 21:43:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: ]] 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.307 21:43:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.307 21:43:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.307 nvme0n1 00:27:51.307 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.307 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.307 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.307 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.307 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.307 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.307 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.307 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.307 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.307 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.568 nvme0n1 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: ]] 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.568 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.829 nvme0n1 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.829 
21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.829 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.830 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.090 21:43:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.090 nvme0n1 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.090 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
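For reference, the nvmet_auth_set_key calls traced above only show bare echo commands (host/auth.sh@48-51); the xtrace does not show where their output is redirected. The sketch below is a hand-written approximation of what that helper does on the kernel nvmet target side, assuming the standard configfs attribute names for in-band authentication (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key); the paths and the $key/$ckey placeholders are assumptions, not copied from the script.

    # Approximate target-side effect of: nvmet_auth_set_key sha512 ffdhe3072 2
    hostnqn=nqn.2024-02.io.spdk:host0                    # host entry used throughout this run
    host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn     # assumed configfs location
    echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"        # digest selected by the outer loop
    echo ffdhe3072      > "$host_cfg/dhchap_dhgroup"     # DH group under test
    echo "$key"         > "$host_cfg/dhchap_key"         # DHHC-1 secret for this keyid
    [[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"   # bidirectional case only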
00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: ]] 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.351 21:43:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.351 nvme0n1 00:27:52.351 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.351 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.351 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.351 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.351 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.351 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.612 21:43:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: ]] 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.612 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.613 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
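Condensed for readability, one connect_authenticate iteration from the trace above amounts to the two RPC calls below. rpc_cmd is the wrapper the test suite defines around scripts/rpc.py, and key3/ckey3 are the names of key objects registered earlier in the run (that part is not in this slice of the log); everything else is copied verbatim from the traced commands.

    # Host-side sequence for: connect_authenticate sha512 ffdhe3072 3
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3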
00:27:52.613 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.613 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.613 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.613 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.613 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.613 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.613 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.613 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.613 nvme0n1 00:27:52.613 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.613 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.613 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.613 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.613 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.613 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.873 
21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.873 nvme0n1 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.873 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: ]] 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.135 21:43:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.397 nvme0n1 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.397 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.398 21:43:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.398 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.659 nvme0n1 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
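The get_main_ns_ip blocks that repeat throughout this trace (nvmf/common.sh@741-755) always collapse to the same answer, 10.0.0.1, because the transport is tcp. Written out as a function, the logic looks roughly like the sketch below; the name of the transport variable ($TEST_TRANSPORT) is an assumption, since the xtrace only shows its expanded value.

    # Rough reconstruction of get_main_ns_ip as it appears in the trace
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}          # name of the env var to use
        [[ -z ${!ip} ]] && return 1                   # indirect expansion, 10.0.0.1 here
        echo "${!ip}"
    }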
00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: ]] 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.659 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.920 nvme0n1 00:27:53.920 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.920 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:53.920 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.920 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.920 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.920 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.180 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.180 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.180 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.180 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.180 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.180 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.180 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:54.180 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: ]] 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.181 21:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.440 nvme0n1 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.440 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.441 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.701 nvme0n1 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: ]] 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.701 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.961 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.961 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.961 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.961 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.961 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.961 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.961 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
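After each attach in this log the script confirms that authentication actually produced a controller and then tears it down before the next combination (host/auth.sh@64-65 in the trace). Spelled out as a plain sequence, with rpc_cmd again being the suite's scripts/rpc.py wrapper:

    # Post-connect verification and teardown, as traced repeatedly above
    ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $ctrlr == "nvme0" ]]                      # DH-HMAC-CHAP handshake succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0    # clean up for the next dhgroup/keyid pair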
00:27:54.961 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.961 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.961 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.961 21:43:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.961 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:54.961 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.961 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.223 nvme0n1 00:27:55.223 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.223 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.223 21:43:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.223 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.223 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.223 21:43:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.223 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.223 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.223 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.223 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
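This whole stretch of the log is driven by two nested loops in host/auth.sh (the @101 and @102 markers): every DH group is exercised with every key id, all with the sha512 digest in this pass. The sketch below shows that shape; the dhgroups listed in the comment are simply the values observed in this portion of the log, and the digest loop that presumably sits outside @101 is not visible in this slice.

    # Loop structure behind the repeated nvmet_auth_set_key / connect_authenticate pairs
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144 seen here
        for keyid in "${!keys[@]}"; do         # key ids 0 through 4 in the trace
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"     # program the kernel target
            connect_authenticate sha512 "$dhgroup" "$keyid"   # attach, verify, detach via RPC
        done
    done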
00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.484 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.745 nvme0n1 00:27:55.745 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.745 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.745 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.745 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.745 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.745 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: ]] 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.005 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.006 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.006 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.006 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.006 21:43:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.006 21:43:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.006 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.006 21:43:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.266 nvme0n1 00:27:56.266 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.266 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.266 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.266 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.266 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.266 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.526 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: ]] 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.527 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.098 nvme0n1 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.098 21:43:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.359 nvme0n1 00:27:57.359 21:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.359 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.359 21:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.359 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.359 21:43:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQxMjIxZGYyYmY5ZjlkMDlmZmU5MzcwNmNmNDVlYmViBKba: 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: ]] 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTY3ZTc4MGM3OTYzMmRkYjdlYTBkODRjZDE4MjVkZTFmZTdkZGJmM2E2NTU1YjFjOGQ5MDUyNTI1N2JjYTg2NmhXWUk=: 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.620 21:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.191 nvme0n1 00:27:58.191 21:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.191 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.191 21:43:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.191 21:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.192 21:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.452 21:43:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.452 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.023 nvme0n1 00:27:59.023 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.023 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.023 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.023 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.023 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.284 21:43:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjJmNmIxMjI5MWVhNjZkYWE2MmQwZDhmNjBkZGQ5YjNzX+r4: 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: ]] 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTMxYTZjZGEwZGFjYTAxMTExN2QxNzc1YTRmOGMzMTgkS20j: 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.284 21:43:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.855 nvme0n1 00:27:59.855 21:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.855 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.855 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.855 21:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.855 21:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODRhNjk4OWFjZTI5ODUwMTc1MzE3ZWEzZjAzNzM5MjM5Y2NlY2Y2ZWVmMzgwYjg5JbrTfA==: 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: ]] 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Nzg5NzM4YTFlZmM5YjVmODkwY2E1MjIzNWRkMmY4MDXlGF3d: 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:00.116 21:43:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.116 21:43:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.688 nvme0n1 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjRiMDA5Y2I3N2Y3NGExZjliZjViYjBkOWY2YzAwYmUwZmFkZWIyNjY3NTgyYzVkZGQzOWYzZTkzY2Y3ZjZhOAI5f+0=: 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.688 21:43:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.986 21:43:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.986 21:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:00.986 21:43:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.557 nvme0n1 00:28:01.557 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.557 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.557 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.557 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.557 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.557 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.557 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.557 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjM5OTZkNGM5N2MwYTNlMTRiZjRiNThlNGUwOWVkNmVjZjQxOTUxNzY0NDRiNWMzuty2nw==: 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: ]] 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDRlYTUwYjliYjBiZWU2ODUwNGE0NmVkZTliNTJiYTQ3ZDY4MGMyY2RlNDRiNWJl/E5H+g==: 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.558 
21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.558 request: 00:28:01.558 { 00:28:01.558 "name": "nvme0", 00:28:01.558 "trtype": "tcp", 00:28:01.558 "traddr": "10.0.0.1", 00:28:01.558 "adrfam": "ipv4", 00:28:01.558 "trsvcid": "4420", 00:28:01.558 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:01.558 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:01.558 "prchk_reftag": false, 00:28:01.558 "prchk_guard": false, 00:28:01.558 "hdgst": false, 00:28:01.558 "ddgst": false, 00:28:01.558 "method": "bdev_nvme_attach_controller", 00:28:01.558 "req_id": 1 00:28:01.558 } 00:28:01.558 Got JSON-RPC error response 00:28:01.558 response: 00:28:01.558 { 00:28:01.558 "code": -5, 00:28:01.558 "message": "Input/output error" 00:28:01.558 } 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.558 21:43:51 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.819 request: 00:28:01.819 { 00:28:01.819 "name": "nvme0", 00:28:01.819 "trtype": "tcp", 00:28:01.819 "traddr": "10.0.0.1", 00:28:01.819 "adrfam": "ipv4", 00:28:01.819 "trsvcid": "4420", 00:28:01.819 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:01.819 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:01.819 "prchk_reftag": false, 00:28:01.819 "prchk_guard": false, 00:28:01.819 "hdgst": false, 00:28:01.819 "ddgst": false, 00:28:01.819 "dhchap_key": "key2", 00:28:01.819 "method": "bdev_nvme_attach_controller", 00:28:01.819 "req_id": 1 00:28:01.819 } 00:28:01.819 Got JSON-RPC error response 00:28:01.819 response: 00:28:01.819 { 00:28:01.819 "code": -5, 00:28:01.819 "message": "Input/output error" 00:28:01.819 } 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:01.819 21:43:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.819 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.820 request: 00:28:01.820 { 00:28:01.820 "name": "nvme0", 00:28:01.820 "trtype": "tcp", 00:28:01.820 "traddr": "10.0.0.1", 00:28:01.820 "adrfam": "ipv4", 
00:28:01.820 "trsvcid": "4420", 00:28:01.820 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:01.820 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:01.820 "prchk_reftag": false, 00:28:01.820 "prchk_guard": false, 00:28:01.820 "hdgst": false, 00:28:01.820 "ddgst": false, 00:28:01.820 "dhchap_key": "key1", 00:28:01.820 "dhchap_ctrlr_key": "ckey2", 00:28:01.820 "method": "bdev_nvme_attach_controller", 00:28:01.820 "req_id": 1 00:28:01.820 } 00:28:01.820 Got JSON-RPC error response 00:28:01.820 response: 00:28:01.820 { 00:28:01.820 "code": -5, 00:28:01.820 "message": "Input/output error" 00:28:01.820 } 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:01.820 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:01.820 rmmod nvme_tcp 00:28:02.081 rmmod nvme_fabrics 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2332358 ']' 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2332358 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2332358 ']' 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2332358 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2332358 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2332358' 00:28:02.081 killing process with pid 2332358 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2332358 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2332358 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.081 21:43:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.627 21:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:04.627 21:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:04.627 21:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:04.627 21:43:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:04.627 21:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:04.627 21:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:04.627 21:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:04.627 21:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:04.627 21:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:04.627 21:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:04.627 21:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:04.627 21:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:04.627 21:43:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:07.171 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:07.171 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:07.171 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:07.171 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:07.171 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:07.171 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:07.171 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:07.171 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:07.171 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:07.171 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:07.171 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:07.171 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:07.171 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:07.171 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:07.171 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:07.431 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:07.431 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:07.692 21:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Z28 /tmp/spdk.key-null.Bu7 /tmp/spdk.key-sha256.SIM /tmp/spdk.key-sha384.zxd /tmp/spdk.key-sha512.psQ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:07.692 21:43:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:10.991 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:10.991 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:10.991 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:10.991 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:10.991 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:10.991 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:10.991 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:10.991 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:10.991 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:10.991 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:10.991 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:10.991 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:10.991 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:10.991 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:10.991 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:10.991 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:10.991 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:11.253 00:28:11.253 real 0m57.999s 00:28:11.253 user 0m52.042s 00:28:11.253 sys 0m14.697s 00:28:11.253 21:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:11.253 21:44:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.253 ************************************ 00:28:11.253 END TEST nvmf_auth_host 00:28:11.253 ************************************ 00:28:11.253 21:44:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:11.253 21:44:00 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:11.253 21:44:00 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:11.253 21:44:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:11.253 21:44:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:11.253 21:44:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:11.253 ************************************ 00:28:11.253 START TEST nvmf_digest 00:28:11.253 ************************************ 00:28:11.253 21:44:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:11.514 * Looking for test storage... 
00:28:11.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:11.514 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:11.515 21:44:01 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:11.515 21:44:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:18.109 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:18.109 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:18.109 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:18.109 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:18.109 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:18.110 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:18.110 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:18.110 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:18.110 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:18.110 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:18.110 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:18.110 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:18.110 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:18.371 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:18.371 21:44:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:18.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:18.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:28:18.371 00:28:18.371 --- 10.0.0.2 ping statistics --- 00:28:18.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.371 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:18.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:18.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:28:18.371 00:28:18.371 --- 10.0.0.1 ping statistics --- 00:28:18.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.371 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:18.371 ************************************ 00:28:18.371 START TEST nvmf_digest_clean 00:28:18.371 ************************************ 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2348765 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2348765 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2348765 ']' 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.371 
21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:18.371 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:18.371 [2024-07-15 21:44:08.159417] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:28:18.371 [2024-07-15 21:44:08.159484] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.633 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.633 [2024-07-15 21:44:08.233091] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.633 [2024-07-15 21:44:08.306648] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.633 [2024-07-15 21:44:08.306690] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.633 [2024-07-15 21:44:08.306697] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.633 [2024-07-15 21:44:08.306704] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.633 [2024-07-15 21:44:08.306709] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
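The target side of this digest test is configured entirely over JSON-RPC: nvmf_tgt is started with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace, and the null0 bdev, TCP transport and 10.0.0.2:4420 listener that show up in the following lines are created by digest.sh's common_target_config. A minimal sketch of that RPC sequence, assuming the stock scripts/rpc.py helpers and the nqn.2016-06.io.spdk:cnode1 subsystem used by the test (the bdev size, block size and subsystem flags below are illustrative, not the exact values the suite passes):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock
    $RPC framework_start_init                                              # release the --wait-for-rpc pause
    $RPC bdev_null_create null0 100 4096                                   # backing bdev (size/block size assumed)
    $RPC nvmf_create_transport -t tcp -o                                   # same NVMF_TRANSPORT_OPTS as shown above
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420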
00:28:18.633 [2024-07-15 21:44:08.306739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.204 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:19.204 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:19.204 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:19.204 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:19.204 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:19.204 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.204 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:19.204 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:19.204 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:19.204 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.204 21:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:19.465 null0 00:28:19.465 [2024-07-15 21:44:09.057642] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.465 [2024-07-15 21:44:09.081840] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.465 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.465 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:19.465 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:19.465 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:19.465 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:19.465 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:19.466 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:19.466 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:19.466 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2348997 00:28:19.466 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2348997 /var/tmp/bperf.sock 00:28:19.466 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2348997 ']' 00:28:19.466 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:19.466 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:19.466 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:19.466 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:19.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:19.466 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:19.466 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:19.466 [2024-07-15 21:44:09.136587] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:28:19.466 [2024-07-15 21:44:09.136637] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2348997 ] 00:28:19.466 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.466 [2024-07-15 21:44:09.210588] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.726 [2024-07-15 21:44:09.274728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.298 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:20.298 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:20.298 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:20.298 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:20.298 21:44:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:20.558 21:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.558 21:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.819 nvme0n1 00:28:20.819 21:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:20.819 21:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:20.819 Running I/O for 2 seconds... 
00:28:23.364 00:28:23.364 Latency(us) 00:28:23.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.364 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:23.364 nvme0n1 : 2.00 20837.09 81.39 0.00 0.00 6133.96 2853.55 19223.89 00:28:23.364 =================================================================================================================== 00:28:23.364 Total : 20837.09 81.39 0.00 0.00 6133.96 2853.55 19223.89 00:28:23.364 0 00:28:23.364 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:23.365 | select(.opcode=="crc32c") 00:28:23.365 | "\(.module_name) \(.executed)"' 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2348997 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2348997 ']' 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2348997 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2348997 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2348997' 00:28:23.365 killing process with pid 2348997 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2348997 00:28:23.365 Received shutdown signal, test time was about 2.000000 seconds 00:28:23.365 00:28:23.365 Latency(us) 00:28:23.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.365 =================================================================================================================== 00:28:23.365 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2348997 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:23.365 21:44:12 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2349680 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2349680 /var/tmp/bperf.sock 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2349680 ']' 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:23.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:23.365 21:44:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:23.365 [2024-07-15 21:44:13.048133] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:28:23.365 [2024-07-15 21:44:13.048229] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2349680 ] 00:28:23.365 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:23.365 Zero copy mechanism will not be used. 
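Each run_bperf pass in this test follows the same RPC-driven pattern visible in the log: bdevperf is launched with -z/--wait-for-rpc on its own socket (/var/tmp/bperf.sock), the framework is released, a controller is attached with the digest option under test, and the two-second workload is driven through bdevperf.py. A rough outline using the paths printed above (the suite wraps these calls in its bperf_rpc/bperf_py helpers):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock
    $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    $SPDK/scripts/rpc.py -s $SOCK framework_start_init
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # --hdgst would exercise header digest instead
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests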
00:28:23.365 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.365 [2024-07-15 21:44:13.127603] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.629 [2024-07-15 21:44:13.180036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.200 21:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:24.200 21:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:24.200 21:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:24.200 21:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:24.200 21:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:24.200 21:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.200 21:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.771 nvme0n1 00:28:24.771 21:44:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:24.771 21:44:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:24.771 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:24.771 Zero copy mechanism will not be used. 00:28:24.771 Running I/O for 2 seconds... 
00:28:26.695 00:28:26.696 Latency(us) 00:28:26.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.696 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:26.696 nvme0n1 : 2.00 2104.58 263.07 0.00 0.00 7599.00 1829.55 18131.63 00:28:26.696 =================================================================================================================== 00:28:26.696 Total : 2104.58 263.07 0.00 0.00 7599.00 1829.55 18131.63 00:28:26.696 0 00:28:26.696 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:26.696 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:26.696 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:26.696 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:26.696 | select(.opcode=="crc32c") 00:28:26.696 | "\(.module_name) \(.executed)"' 00:28:26.696 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2349680 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2349680 ']' 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2349680 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2349680 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2349680' 00:28:26.990 killing process with pid 2349680 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2349680 00:28:26.990 Received shutdown signal, test time was about 2.000000 seconds 00:28:26.990 00:28:26.990 Latency(us) 00:28:26.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.990 =================================================================================================================== 00:28:26.990 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2349680 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:26.990 21:44:16 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2350404 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2350404 /var/tmp/bperf.sock 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2350404 ']' 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:26.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:26.990 21:44:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:26.990 [2024-07-15 21:44:16.777206] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:28:26.990 [2024-07-15 21:44:16.777260] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2350404 ] 00:28:27.250 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.250 [2024-07-15 21:44:16.852845] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.250 [2024-07-15 21:44:16.906491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.819 21:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:27.819 21:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:27.819 21:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:27.819 21:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:27.819 21:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:28.079 21:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.079 21:44:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.338 nvme0n1 00:28:28.338 21:44:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:28.338 21:44:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:28.338 Running I/O for 2 seconds... 
00:28:30.872 00:28:30.872 Latency(us) 00:28:30.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.872 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:30.872 nvme0n1 : 2.01 21292.67 83.17 0.00 0.00 5999.93 3986.77 10485.76 00:28:30.872 =================================================================================================================== 00:28:30.872 Total : 21292.67 83.17 0.00 0.00 5999.93 3986.77 10485.76 00:28:30.872 0 00:28:30.872 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:30.872 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:30.872 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:30.872 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:30.872 | select(.opcode=="crc32c") 00:28:30.872 | "\(.module_name) \(.executed)"' 00:28:30.872 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:30.872 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2350404 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2350404 ']' 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2350404 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2350404 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2350404' 00:28:30.873 killing process with pid 2350404 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2350404 00:28:30.873 Received shutdown signal, test time was about 2.000000 seconds 00:28:30.873 00:28:30.873 Latency(us) 00:28:30.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.873 =================================================================================================================== 00:28:30.873 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2350404 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:30.873 21:44:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2351217 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2351217 /var/tmp/bperf.sock 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2351217 ']' 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:30.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:30.873 21:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:30.873 [2024-07-15 21:44:20.512807] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:28:30.873 [2024-07-15 21:44:20.512862] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2351217 ] 00:28:30.873 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:30.873 Zero copy mechanism will not be used. 
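The pass/fail decision for each of these digest_clean runs does not come from bdevperf's I/O numbers but from the accel framework: after perform_tests finishes, the suite reads the crc32c statistics over the bperf socket and checks that digests were actually computed, and by the expected module (software here, since scan_dsa=false throughout). Roughly, using the same jq filter that appears in the log:

    read -r acc_module acc_executed < <(
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
          | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))            # some crc32c operations must have executed
    [[ $acc_module == software ]]     # exp_module is software because dsa scanning is disabled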
00:28:30.873 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.873 [2024-07-15 21:44:20.588580] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.873 [2024-07-15 21:44:20.641704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.812 21:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:31.812 21:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:31.812 21:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:31.812 21:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:31.812 21:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:31.812 21:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.812 21:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.072 nvme0n1 00:28:32.072 21:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:32.073 21:44:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:32.073 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:32.073 Zero copy mechanism will not be used. 00:28:32.073 Running I/O for 2 seconds... 
00:28:34.615
00:28:34.615 Latency(us)
00:28:34.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:34.615 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:34.615 nvme0n1 : 2.00 2996.88 374.61 0.00 0.00 5330.55 3959.47 22391.47
00:28:34.615 ===================================================================================================================
00:28:34.615 Total : 2996.88 374.61 0.00 0.00 5330.55 3959.47 22391.47
00:28:34.615 0
00:28:34.615 21:44:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:34.615 21:44:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:34.615 21:44:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:34.615 21:44:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:34.615 | select(.opcode=="crc32c")
00:28:34.615 | "\(.module_name) \(.executed)"'
00:28:34.615 21:44:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2351217
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2351217 ']'
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2351217
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2351217
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2351217'
00:28:34.615 killing process with pid 2351217
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2351217
00:28:34.615 Received shutdown signal, test time was about 2.000000 seconds
00:28:34.615
00:28:34.615 Latency(us)
00:28:34.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:34.615 ===================================================================================================================
00:28:34.615 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2351217
00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2348765
00:28:34.615 21:44:24
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2348765 ']' 00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2348765 00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2348765 00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2348765' 00:28:34.615 killing process with pid 2348765 00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2348765 00:28:34.615 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2348765 00:28:34.615 00:28:34.615 real 0m16.272s 00:28:34.615 user 0m32.051s 00:28:34.616 sys 0m3.157s 00:28:34.616 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:34.616 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:34.616 ************************************ 00:28:34.616 END TEST nvmf_digest_clean 00:28:34.616 ************************************ 00:28:34.616 21:44:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:34.616 21:44:24 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:34.616 21:44:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:34.616 21:44:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:34.616 21:44:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:34.876 ************************************ 00:28:34.876 START TEST nvmf_digest_error 00:28:34.876 ************************************ 00:28:34.876 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:28:34.876 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:34.876 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:34.876 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:34.876 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.876 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2352091 00:28:34.876 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2352091 00:28:34.876 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:34.876 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2352091 ']' 00:28:34.876 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:28:34.876 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:34.876 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.876 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:34.876 21:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.876 [2024-07-15 21:44:24.508485] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:28:34.876 [2024-07-15 21:44:24.508540] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.876 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.876 [2024-07-15 21:44:24.576994] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.876 [2024-07-15 21:44:24.650259] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.876 [2024-07-15 21:44:24.650297] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.876 [2024-07-15 21:44:24.650305] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.876 [2024-07-15 21:44:24.650311] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.876 [2024-07-15 21:44:24.650317] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
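For the error-injection half of the test, the target side is brought up the same way: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc and the full tracepoint mask (-e 0xFFFF), then waits on /var/tmp/spdk.sock; the notices above also say how to snapshot those tracepoints. A hedged sketch of that launch plus the suggested trace capture follows; the spdk_trace binary path is an assumption, while the command itself is the one printed by the app:

  # Sketch only: reproduce the target launch shown above (nvmf/common.sh@480-482).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC_SOCK=/var/tmp/spdk.sock

  sudo ip netns exec cvl_0_0_ns_spdk \
      "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  # waitforlisten equivalent: block until the RPC socket answers.
  until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done

  # Tracepoint group mask 0xFFFF is enabled, so a snapshot can be taken as the notice
  # suggests, or /dev/shm/nvmf_trace.0 copied for offline analysis.
  "$SPDK_DIR/build/bin/spdk_trace" -s nvmf -i 0   # binary path assumed; command from the log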
00:28:34.876 [2024-07-15 21:44:24.650335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:35.816 [2024-07-15 21:44:25.324262] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:35.816 null0 00:28:35.816 [2024-07-15 21:44:25.404999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.816 [2024-07-15 21:44:25.429210] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2352142 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2352142 /var/tmp/bperf.sock 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2352142 ']' 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:35.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:35.816 21:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:35.816 [2024-07-15 21:44:25.493032] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:28:35.816 [2024-07-15 21:44:25.493097] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352142 ] 00:28:35.816 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.816 [2024-07-15 21:44:25.570143] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.077 [2024-07-15 21:44:25.624031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.647 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:36.647 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:36.647 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:36.647 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:36.647 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:36.647 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.647 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:36.647 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.647 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:36.647 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.218 nvme0n1 00:28:37.218 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:37.218 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.218 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.218 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.218 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:37.218 21:44:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:37.218 Running I/O for 2 seconds... 00:28:37.218 [2024-07-15 21:44:26.877638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.218 [2024-07-15 21:44:26.877668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.218 [2024-07-15 21:44:26.877677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.218 [2024-07-15 21:44:26.891868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.218 [2024-07-15 21:44:26.891888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.218 [2024-07-15 21:44:26.891895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.218 [2024-07-15 21:44:26.904756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.218 [2024-07-15 21:44:26.904774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.218 [2024-07-15 21:44:26.904781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.218 [2024-07-15 21:44:26.916077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.218 [2024-07-15 21:44:26.916096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.218 [2024-07-15 21:44:26.916102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.218 [2024-07-15 21:44:26.928268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.218 [2024-07-15 21:44:26.928286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.218 [2024-07-15 21:44:26.928293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.218 [2024-07-15 21:44:26.941238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.218 [2024-07-15 21:44:26.941256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.218 [2024-07-15 21:44:26.941262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.218 [2024-07-15 21:44:26.952748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.218 [2024-07-15 21:44:26.952766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23296 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.218 [2024-07-15 21:44:26.952772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.218 [2024-07-15 21:44:26.965351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.218 [2024-07-15 21:44:26.965368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.218 [2024-07-15 21:44:26.965379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.218 [2024-07-15 21:44:26.977022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.218 [2024-07-15 21:44:26.977039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.218 [2024-07-15 21:44:26.977046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.218 [2024-07-15 21:44:26.989880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.218 [2024-07-15 21:44:26.989898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.218 [2024-07-15 21:44:26.989904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.218 [2024-07-15 21:44:27.002443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.218 [2024-07-15 21:44:27.002460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.218 [2024-07-15 21:44:27.002467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.218 [2024-07-15 21:44:27.015068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.218 [2024-07-15 21:44:27.015085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.218 [2024-07-15 21:44:27.015092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.026379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.026396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.026403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.039358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.039376] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.039382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.051530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.051548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.051554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.064356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.064374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.064380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.076673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.076690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.076696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.088875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.088893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.088899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.100621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.100638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.100644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.112312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.112329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.112336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.125418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.125435] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.125442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.138044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.138062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.138068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.149536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.149553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.149559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.161763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.161781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.161787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.174656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.174674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.174684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.186674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.186692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.186698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.198242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.198259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.198266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.210576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.210593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.210600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.224376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.224393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.224400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.236741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.236758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.236765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.248600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.248617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.248623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.260305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.479 [2024-07-15 21:44:27.260322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.479 [2024-07-15 21:44:27.260329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.479 [2024-07-15 21:44:27.273041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.480 [2024-07-15 21:44:27.273058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.480 [2024-07-15 21:44:27.273064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.741 [2024-07-15 21:44:27.286471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.741 [2024-07-15 21:44:27.286491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.741 [2024-07-15 21:44:27.286497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.741 [2024-07-15 21:44:27.298521] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.741 [2024-07-15 21:44:27.298539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.741 [2024-07-15 21:44:27.298545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.741 [2024-07-15 21:44:27.310914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.741 [2024-07-15 21:44:27.310932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.741 [2024-07-15 21:44:27.310938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.741 [2024-07-15 21:44:27.322845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.741 [2024-07-15 21:44:27.322863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.741 [2024-07-15 21:44:27.322869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.741 [2024-07-15 21:44:27.335314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.741 [2024-07-15 21:44:27.335332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.741 [2024-07-15 21:44:27.335339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.741 [2024-07-15 21:44:27.347750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.741 [2024-07-15 21:44:27.347767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.741 [2024-07-15 21:44:27.347773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.741 [2024-07-15 21:44:27.360537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.741 [2024-07-15 21:44:27.360554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.741 [2024-07-15 21:44:27.360561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.741 [2024-07-15 21:44:27.373141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.741 [2024-07-15 21:44:27.373158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.741 [2024-07-15 21:44:27.373165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:37.741 [2024-07-15 21:44:27.384201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.741 [2024-07-15 21:44:27.384218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.741 [2024-07-15 21:44:27.384225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.741 [2024-07-15 21:44:27.396205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.741 [2024-07-15 21:44:27.396223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.741 [2024-07-15 21:44:27.396229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.741 [2024-07-15 21:44:27.408653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.741 [2024-07-15 21:44:27.408671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.742 [2024-07-15 21:44:27.408677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.742 [2024-07-15 21:44:27.421402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.742 [2024-07-15 21:44:27.421420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.742 [2024-07-15 21:44:27.421426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.742 [2024-07-15 21:44:27.433573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.742 [2024-07-15 21:44:27.433590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.742 [2024-07-15 21:44:27.433596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.742 [2024-07-15 21:44:27.445887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.742 [2024-07-15 21:44:27.445905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.742 [2024-07-15 21:44:27.445911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.742 [2024-07-15 21:44:27.457285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.742 [2024-07-15 21:44:27.457303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.742 [2024-07-15 21:44:27.457309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.742 [2024-07-15 21:44:27.470914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.742 [2024-07-15 21:44:27.470931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.742 [2024-07-15 21:44:27.470938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.742 [2024-07-15 21:44:27.483591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.742 [2024-07-15 21:44:27.483608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.742 [2024-07-15 21:44:27.483614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.742 [2024-07-15 21:44:27.495141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.742 [2024-07-15 21:44:27.495158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.742 [2024-07-15 21:44:27.495167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.742 [2024-07-15 21:44:27.508276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.742 [2024-07-15 21:44:27.508292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.742 [2024-07-15 21:44:27.508299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.742 [2024-07-15 21:44:27.519701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.742 [2024-07-15 21:44:27.519719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.742 [2024-07-15 21:44:27.519725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.742 [2024-07-15 21:44:27.532136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.742 [2024-07-15 21:44:27.532152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.742 [2024-07-15 21:44:27.532159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.742 [2024-07-15 21:44:27.543973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:37.742 [2024-07-15 21:44:27.543990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.742 [2024-07-15 21:44:27.543997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.557033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.557051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.557057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.569825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.569843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.569852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.581872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.581890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.581896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.593983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.594000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.594006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.606257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.606278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.606284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.618232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.618249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.618255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.630390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.630406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:38.004 [2024-07-15 21:44:27.630413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.642462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.642479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.642485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.654730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.654747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.654754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.666835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.666852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.666858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.679988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.680005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.680011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.692656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.692672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.692679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.704750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.704767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.704776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.717019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.717036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:20148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.717043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.728887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.728905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.728911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.740600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.740617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.740623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.753264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.753282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.753288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.765002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.765020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.765026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.777667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.777685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.777691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.790895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.790912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.790918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.004 [2024-07-15 21:44:27.803934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.004 [2024-07-15 21:44:27.803951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.004 [2024-07-15 21:44:27.803958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.267 [2024-07-15 21:44:27.816312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.267 [2024-07-15 21:44:27.816332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.267 [2024-07-15 21:44:27.816338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.267 [2024-07-15 21:44:27.827421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.267 [2024-07-15 21:44:27.827438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.267 [2024-07-15 21:44:27.827444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.267 [2024-07-15 21:44:27.841432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.267 [2024-07-15 21:44:27.841449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.267 [2024-07-15 21:44:27.841455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.267 [2024-07-15 21:44:27.853173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.267 [2024-07-15 21:44:27.853191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.267 [2024-07-15 21:44:27.853197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.267 [2024-07-15 21:44:27.865239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.267 [2024-07-15 21:44:27.865256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.267 [2024-07-15 21:44:27.865262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.267 [2024-07-15 21:44:27.878956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.267 [2024-07-15 21:44:27.878973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.267 [2024-07-15 21:44:27.878980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.267 [2024-07-15 21:44:27.891727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 
00:28:38.267 [2024-07-15 21:44:27.891743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.267 [2024-07-15 21:44:27.891749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.267 [2024-07-15 21:44:27.902967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.267 [2024-07-15 21:44:27.902984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.267 [2024-07-15 21:44:27.902991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.267 [2024-07-15 21:44:27.915070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.267 [2024-07-15 21:44:27.915087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.267 [2024-07-15 21:44:27.915093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.267 [2024-07-15 21:44:27.928508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.267 [2024-07-15 21:44:27.928524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.267 [2024-07-15 21:44:27.928530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.267 [2024-07-15 21:44:27.940943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.267 [2024-07-15 21:44:27.940959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.267 [2024-07-15 21:44:27.940966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.267 [2024-07-15 21:44:27.950953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.268 [2024-07-15 21:44:27.950970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.268 [2024-07-15 21:44:27.950976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.268 [2024-07-15 21:44:27.963403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.268 [2024-07-15 21:44:27.963420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.268 [2024-07-15 21:44:27.963426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.268 [2024-07-15 21:44:27.976833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.268 [2024-07-15 21:44:27.976850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.268 [2024-07-15 21:44:27.976856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.268 [2024-07-15 21:44:27.991086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.268 [2024-07-15 21:44:27.991103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.268 [2024-07-15 21:44:27.991110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.268 [2024-07-15 21:44:28.002371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.268 [2024-07-15 21:44:28.002388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.268 [2024-07-15 21:44:28.002394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.268 [2024-07-15 21:44:28.014033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.268 [2024-07-15 21:44:28.014050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.268 [2024-07-15 21:44:28.014056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.268 [2024-07-15 21:44:28.026081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.268 [2024-07-15 21:44:28.026097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.268 [2024-07-15 21:44:28.026105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.268 [2024-07-15 21:44:28.038423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.268 [2024-07-15 21:44:28.038439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.268 [2024-07-15 21:44:28.038446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.268 [2024-07-15 21:44:28.050730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.268 [2024-07-15 21:44:28.050746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.268 [2024-07-15 21:44:28.050752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.268 [2024-07-15 21:44:28.063353] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.268 [2024-07-15 21:44:28.063369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.268 [2024-07-15 21:44:28.063375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.075203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.075220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.075227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.087910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.087926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.087932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.101231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.101247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.101253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.113134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.113150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.113157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.124540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.124556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.124562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.137865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.137887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.137893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.149946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.149962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.149968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.162527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.162543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.162549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.174133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.174149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.174155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.185844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.185860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.185866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.199986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.200003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.200009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.211311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.211327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.211333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.223909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.223925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.223931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.236353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.236369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.236375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.249227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.249244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.249250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.261526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.261542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.261549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.272715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.272731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.272737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.284626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.284642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.284649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.297784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.297799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.297806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.308865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.529 [2024-07-15 21:44:28.308881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.529 [2024-07-15 21:44:28.308888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.529 [2024-07-15 21:44:28.321619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.530 [2024-07-15 21:44:28.321636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.530 [2024-07-15 21:44:28.321642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.791 [2024-07-15 21:44:28.334593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.791 [2024-07-15 21:44:28.334610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-07-15 21:44:28.334616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.791 [2024-07-15 21:44:28.346848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.791 [2024-07-15 21:44:28.346866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-07-15 21:44:28.346873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.791 [2024-07-15 21:44:28.360203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.791 [2024-07-15 21:44:28.360220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-07-15 21:44:28.360225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.791 [2024-07-15 21:44:28.371394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.791 [2024-07-15 21:44:28.371410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-07-15 21:44:28.371416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.791 [2024-07-15 21:44:28.383008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.791 [2024-07-15 21:44:28.383024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-07-15 21:44:28.383031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.791 [2024-07-15 21:44:28.395035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.791 [2024-07-15 21:44:28.395051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:38.791 [2024-07-15 21:44:28.395057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.791 [2024-07-15 21:44:28.408231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.791 [2024-07-15 21:44:28.408248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-07-15 21:44:28.408255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.791 [2024-07-15 21:44:28.420273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.791 [2024-07-15 21:44:28.420289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-07-15 21:44:28.420295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.791 [2024-07-15 21:44:28.432524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.792 [2024-07-15 21:44:28.432541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-07-15 21:44:28.432547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-07-15 21:44:28.444540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.792 [2024-07-15 21:44:28.444556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-07-15 21:44:28.444563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-07-15 21:44:28.457006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.792 [2024-07-15 21:44:28.457023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-07-15 21:44:28.457029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-07-15 21:44:28.469403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.792 [2024-07-15 21:44:28.469420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-07-15 21:44:28.469426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-07-15 21:44:28.482257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.792 [2024-07-15 21:44:28.482273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:20028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-07-15 21:44:28.482280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-07-15 21:44:28.494699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.792 [2024-07-15 21:44:28.494716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-07-15 21:44:28.494722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-07-15 21:44:28.507026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.792 [2024-07-15 21:44:28.507042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-07-15 21:44:28.507048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-07-15 21:44:28.519946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.792 [2024-07-15 21:44:28.519963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-07-15 21:44:28.519969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-07-15 21:44:28.530908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.792 [2024-07-15 21:44:28.530924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-07-15 21:44:28.530931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-07-15 21:44:28.543760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.792 [2024-07-15 21:44:28.543776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-07-15 21:44:28.543783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-07-15 21:44:28.556606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.792 [2024-07-15 21:44:28.556622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-07-15 21:44:28.556631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-07-15 21:44:28.569318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.792 [2024-07-15 21:44:28.569335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-07-15 21:44:28.569341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-07-15 21:44:28.581437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.792 [2024-07-15 21:44:28.581453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-07-15 21:44:28.581459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-07-15 21:44:28.593906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:38.792 [2024-07-15 21:44:28.593923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-07-15 21:44:28.593929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.053 [2024-07-15 21:44:28.606319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.053 [2024-07-15 21:44:28.606337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.053 [2024-07-15 21:44:28.606343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.053 [2024-07-15 21:44:28.618272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.053 [2024-07-15 21:44:28.618288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.053 [2024-07-15 21:44:28.618295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.053 [2024-07-15 21:44:28.630628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.053 [2024-07-15 21:44:28.630644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.053 [2024-07-15 21:44:28.630650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.053 [2024-07-15 21:44:28.643810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.053 [2024-07-15 21:44:28.643826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.053 [2024-07-15 21:44:28.643833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.053 [2024-07-15 21:44:28.655464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.053 
[2024-07-15 21:44:28.655480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.053 [2024-07-15 21:44:28.655486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.053 [2024-07-15 21:44:28.667579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.053 [2024-07-15 21:44:28.667598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.053 [2024-07-15 21:44:28.667604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.054 [2024-07-15 21:44:28.679621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.054 [2024-07-15 21:44:28.679636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-07-15 21:44:28.679642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.054 [2024-07-15 21:44:28.692647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.054 [2024-07-15 21:44:28.692663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-07-15 21:44:28.692669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.054 [2024-07-15 21:44:28.705069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.054 [2024-07-15 21:44:28.705085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-07-15 21:44:28.705091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.054 [2024-07-15 21:44:28.716902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.054 [2024-07-15 21:44:28.716918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-07-15 21:44:28.716924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.054 [2024-07-15 21:44:28.729029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.054 [2024-07-15 21:44:28.729045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-07-15 21:44:28.729051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.054 [2024-07-15 21:44:28.741139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1c7cbb0) 00:28:39.054 [2024-07-15 21:44:28.741155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-07-15 21:44:28.741161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.054 [2024-07-15 21:44:28.753243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.054 [2024-07-15 21:44:28.753258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-07-15 21:44:28.753264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.054 [2024-07-15 21:44:28.765743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.054 [2024-07-15 21:44:28.765759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-07-15 21:44:28.765765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.054 [2024-07-15 21:44:28.777817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.054 [2024-07-15 21:44:28.777832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-07-15 21:44:28.777839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.054 [2024-07-15 21:44:28.790346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.054 [2024-07-15 21:44:28.790362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-07-15 21:44:28.790368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.054 [2024-07-15 21:44:28.802116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.054 [2024-07-15 21:44:28.802135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-07-15 21:44:28.802142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.054 [2024-07-15 21:44:28.815737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0) 00:28:39.054 [2024-07-15 21:44:28.815754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-07-15 21:44:28.815760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.054 [2024-07-15 21:44:28.826375] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0)
00:28:39.054 [2024-07-15 21:44:28.826391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.054 [2024-07-15 21:44:28.826397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.054 [2024-07-15 21:44:28.839955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0)
00:28:39.054 [2024-07-15 21:44:28.839972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.054 [2024-07-15 21:44:28.839978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.054 [2024-07-15 21:44:28.851791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0)
00:28:39.054 [2024-07-15 21:44:28.851807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.054 [2024-07-15 21:44:28.851813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.315 [2024-07-15 21:44:28.863769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c7cbb0)
00:28:39.315 [2024-07-15 21:44:28.863786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.315 [2024-07-15 21:44:28.863792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.315
00:28:39.315 Latency(us)
00:28:39.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:39.315 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:39.315 nvme0n1 : 2.00 20662.11 80.71 0.00 0.00 6187.26 3331.41 15073.28
00:28:39.315 ===================================================================================================================
00:28:39.315 Total : 20662.11 80.71 0.00 0.00 6187.26 3331.41 15073.28
00:28:39.315 0
00:28:39.315 21:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:39.315 21:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:39.315 21:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:39.315 | .driver_specific
00:28:39.315 | .nvme_error
00:28:39.315 | .status_code
00:28:39.315 | .command_transient_transport_error'
00:28:39.315 21:44:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:39.315 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 ))
00:28:39.315 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2352142
00:28:39.315 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2352142 ']'
00:28:39.315 21:44:29
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2352142 00:28:39.315 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:39.315 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:39.315 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2352142 00:28:39.315 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:39.315 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:39.315 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2352142' 00:28:39.315 killing process with pid 2352142 00:28:39.315 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2352142 00:28:39.315 Received shutdown signal, test time was about 2.000000 seconds 00:28:39.315 00:28:39.315 Latency(us) 00:28:39.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.315 =================================================================================================================== 00:28:39.315 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:39.315 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2352142 00:28:39.577 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:39.577 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:39.577 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:39.577 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:39.577 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:39.577 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2352942 00:28:39.577 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2352942 /var/tmp/bperf.sock 00:28:39.577 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2352942 ']' 00:28:39.577 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:39.577 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:39.577 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:39.577 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:39.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:39.577 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:39.577 21:44:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.577 [2024-07-15 21:44:29.267618] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:28:39.577 [2024-07-15 21:44:29.267673] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352942 ] 00:28:39.577 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:39.577 Zero copy mechanism will not be used. 00:28:39.577 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.577 [2024-07-15 21:44:29.343326] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.838 [2024-07-15 21:44:29.396614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.408 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:40.408 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:40.408 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:40.408 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:40.408 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:40.408 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.408 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.408 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.408 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.408 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.668 nvme0n1 00:28:40.929 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:40.929 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.929 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.929 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.929 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:40.929 21:44:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:40.929 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:40.929 Zero copy mechanism will not be used. 00:28:40.929 Running I/O for 2 seconds... 
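The trace above sets up the second error run: bdevperf is started against /var/tmp/bperf.sock, CRC32C corruption is injected through accel_error_inject_error, the controller is attached with --ddgst, and perform_tests drives the workload. A minimal sketch of that flow, using only the commands and paths printed in this log (the target-side socket behind rpc_cmd is not shown here, so that call is left as a comment; the bdev name nvme0n1 is the one reported above):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Target side (digest.sh@67, issued via rpc_cmd against the target's RPC socket,
    # which this trace does not show):
    #   accel_error_inject_error -o crc32c -t corrupt -i 32

    # Initiator side: drive I/O from the bdevperf instance, then read back how many
    # completions failed with a transient transport error (digest.sh@27-28 above).
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

    # digest.sh@71 passes the run when this count is non-zero; the previous run above counted 162.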
00:28:40.929 [2024-07-15 21:44:30.589304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:40.929 [2024-07-15 21:44:30.589334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.929 [2024-07-15 21:44:30.589342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.929 [2024-07-15 21:44:30.604607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:40.929 [2024-07-15 21:44:30.604630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.929 [2024-07-15 21:44:30.604638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.929 [2024-07-15 21:44:30.618336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:40.929 [2024-07-15 21:44:30.618355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.929 [2024-07-15 21:44:30.618362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.929 [2024-07-15 21:44:30.634773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:40.929 [2024-07-15 21:44:30.634792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.929 [2024-07-15 21:44:30.634799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.929 [2024-07-15 21:44:30.648168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:40.929 [2024-07-15 21:44:30.648186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.929 [2024-07-15 21:44:30.648193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.929 [2024-07-15 21:44:30.663819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:40.929 [2024-07-15 21:44:30.663837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.929 [2024-07-15 21:44:30.663843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.929 [2024-07-15 21:44:30.680267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:40.929 [2024-07-15 21:44:30.680285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.929 [2024-07-15 21:44:30.680291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.929 [2024-07-15 21:44:30.696973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:40.930 [2024-07-15 21:44:30.696991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.930 [2024-07-15 21:44:30.696997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.930 [2024-07-15 21:44:30.711915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:40.930 [2024-07-15 21:44:30.711933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.930 [2024-07-15 21:44:30.711940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.930 [2024-07-15 21:44:30.726346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:40.930 [2024-07-15 21:44:30.726364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.930 [2024-07-15 21:44:30.726370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.191 [2024-07-15 21:44:30.742616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.191 [2024-07-15 21:44:30.742632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.191 [2024-07-15 21:44:30.742639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.191 [2024-07-15 21:44:30.756486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.191 [2024-07-15 21:44:30.756503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.191 [2024-07-15 21:44:30.756509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.191 [2024-07-15 21:44:30.773281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.191 [2024-07-15 21:44:30.773298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.191 [2024-07-15 21:44:30.773304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.191 [2024-07-15 21:44:30.786415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.191 [2024-07-15 21:44:30.786432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.191 [2024-07-15 21:44:30.786438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.191 [2024-07-15 21:44:30.801993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.191 [2024-07-15 21:44:30.802010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.191 [2024-07-15 21:44:30.802016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.191 [2024-07-15 21:44:30.817251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.191 [2024-07-15 21:44:30.817269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.191 [2024-07-15 21:44:30.817275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.191 [2024-07-15 21:44:30.831963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.191 [2024-07-15 21:44:30.831981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.191 [2024-07-15 21:44:30.831987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.191 [2024-07-15 21:44:30.848676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.191 [2024-07-15 21:44:30.848694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.191 [2024-07-15 21:44:30.848700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.191 [2024-07-15 21:44:30.865331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.191 [2024-07-15 21:44:30.865348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.191 [2024-07-15 21:44:30.865358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.191 [2024-07-15 21:44:30.879651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.191 [2024-07-15 21:44:30.879669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.191 [2024-07-15 21:44:30.879675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.191 [2024-07-15 21:44:30.896426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.192 [2024-07-15 21:44:30.896443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:41.192 [2024-07-15 21:44:30.896449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.192 [2024-07-15 21:44:30.912933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.192 [2024-07-15 21:44:30.912951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.192 [2024-07-15 21:44:30.912957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.192 [2024-07-15 21:44:30.927498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.192 [2024-07-15 21:44:30.927515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.192 [2024-07-15 21:44:30.927521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.192 [2024-07-15 21:44:30.943194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.192 [2024-07-15 21:44:30.943212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.192 [2024-07-15 21:44:30.943217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.192 [2024-07-15 21:44:30.956723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.192 [2024-07-15 21:44:30.956741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.192 [2024-07-15 21:44:30.956747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.192 [2024-07-15 21:44:30.970989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.192 [2024-07-15 21:44:30.971006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.192 [2024-07-15 21:44:30.971012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.192 [2024-07-15 21:44:30.986475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.192 [2024-07-15 21:44:30.986493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.192 [2024-07-15 21:44:30.986499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.001933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.001950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.001956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.015558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.015575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.015581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.030977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.030995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.031001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.046909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.046926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.046932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.061502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.061520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.061526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.078928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.078945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.078952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.091439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.091457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.091462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.106355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.106372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.106378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.121941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.121958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.121968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.136159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.136176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.136182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.147884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.147901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.147907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.164229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.164247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.164253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.179905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.179922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.179928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.195867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.195885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.195891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.212155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 
00:28:41.453 [2024-07-15 21:44:31.212172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.212178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.226698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.226715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.226721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.453 [2024-07-15 21:44:31.243428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.453 [2024-07-15 21:44:31.243445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.453 [2024-07-15 21:44:31.243451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.258800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.258821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.258827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.274046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.274063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.274069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.289832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.289850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.289855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.305741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.305758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.305764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.321903] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.321921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.321927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.337059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.337077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.337083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.353015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.353033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.353039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.368228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.368246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.368252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.384680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.384697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.384703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.401181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.401198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.401205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.417584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.417601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.417607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.433066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.433083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.433089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.448222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.448239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.448245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.463678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.463695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.463701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.479490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.479507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.479513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.496611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.496628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.496634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.714 [2024-07-15 21:44:31.512481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.714 [2024-07-15 21:44:31.512498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.714 [2024-07-15 21:44:31.512504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.974 [2024-07-15 21:44:31.528094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.974 [2024-07-15 21:44:31.528112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.974 [2024-07-15 21:44:31.528121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.974 [2024-07-15 21:44:31.544927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.974 [2024-07-15 21:44:31.544944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.974 [2024-07-15 21:44:31.544950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.974 [2024-07-15 21:44:31.560570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.974 [2024-07-15 21:44:31.560588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.974 [2024-07-15 21:44:31.560594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.974 [2024-07-15 21:44:31.577618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.974 [2024-07-15 21:44:31.577635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.974 [2024-07-15 21:44:31.577641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.974 [2024-07-15 21:44:31.593919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.974 [2024-07-15 21:44:31.593936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.974 [2024-07-15 21:44:31.593942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.974 [2024-07-15 21:44:31.609600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.974 [2024-07-15 21:44:31.609617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.974 [2024-07-15 21:44:31.609623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.974 [2024-07-15 21:44:31.625496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.974 [2024-07-15 21:44:31.625512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.974 [2024-07-15 21:44:31.625518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.974 [2024-07-15 21:44:31.639897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.974 [2024-07-15 21:44:31.639914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.974 [2024-07-15 21:44:31.639920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.974 [2024-07-15 21:44:31.655653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.974 [2024-07-15 21:44:31.655670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.974 [2024-07-15 21:44:31.655676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.974 [2024-07-15 21:44:31.670456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.974 [2024-07-15 21:44:31.670474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.974 [2024-07-15 21:44:31.670480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.974 [2024-07-15 21:44:31.686296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.974 [2024-07-15 21:44:31.686313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.974 [2024-07-15 21:44:31.686319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.974 [2024-07-15 21:44:31.701433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.974 [2024-07-15 21:44:31.701450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.974 [2024-07-15 21:44:31.701457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.974 [2024-07-15 21:44:31.718144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.974 [2024-07-15 21:44:31.718161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.974 [2024-07-15 21:44:31.718167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:41.974 [2024-07-15 21:44:31.733940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.975 [2024-07-15 21:44:31.733957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.975 [2024-07-15 21:44:31.733963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:41.975 [2024-07-15 21:44:31.748488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.975 [2024-07-15 21:44:31.748505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:41.975 [2024-07-15 21:44:31.748511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.975 [2024-07-15 21:44:31.762232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.975 [2024-07-15 21:44:31.762248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.975 [2024-07-15 21:44:31.762254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:41.975 [2024-07-15 21:44:31.775807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:41.975 [2024-07-15 21:44:31.775824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.975 [2024-07-15 21:44:31.775830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:31.792046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:31.792063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:31.792072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:31.804005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:31.804022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:31.804028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:31.820485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:31.820502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:31.820508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:31.836085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:31.836101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:31.836107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:31.850858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:31.850875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:31.850881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:31.862572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:31.862589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:31.862595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:31.878139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:31.878156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:31.878162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:31.892416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:31.892433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:31.892439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:31.908228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:31.908245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:31.908251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:31.924320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:31.924340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:31.924346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:31.940205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:31.940222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:31.940228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:31.956976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:31.956992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:31.956999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:31.972838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:31.972855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:31.972861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:31.989424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:31.989441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:31.989447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:32.005505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:32.005522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:32.005528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:32.021499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:32.021516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:32.021522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.235 [2024-07-15 21:44:32.037709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.235 [2024-07-15 21:44:32.037726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.235 [2024-07-15 21:44:32.037732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.054170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.054188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.054194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.069700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 
00:28:42.496 [2024-07-15 21:44:32.069717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.069724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.085818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.085835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.085841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.102990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.103008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.103014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.115896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.115913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.115919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.130050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.130067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.130073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.146527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.146544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.146550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.161387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.161404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.161410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.177001] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.177018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.177024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.190445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.190466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.190471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.205596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.205614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.205620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.221524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.221541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.221547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.237001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.237018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.237024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.251751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.251768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.251774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.267032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.267048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.267054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.281755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.281772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.281778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.496 [2024-07-15 21:44:32.297494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.496 [2024-07-15 21:44:32.297511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.496 [2024-07-15 21:44:32.297517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.757 [2024-07-15 21:44:32.312129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.757 [2024-07-15 21:44:32.312146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.757 [2024-07-15 21:44:32.312152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.757 [2024-07-15 21:44:32.325722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.757 [2024-07-15 21:44:32.325739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.757 [2024-07-15 21:44:32.325745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.757 [2024-07-15 21:44:32.336963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.757 [2024-07-15 21:44:32.336980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.757 [2024-07-15 21:44:32.336986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.757 [2024-07-15 21:44:32.351888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.757 [2024-07-15 21:44:32.351905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.757 [2024-07-15 21:44:32.351911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.757 [2024-07-15 21:44:32.366318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.757 [2024-07-15 21:44:32.366335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.757 [2024-07-15 21:44:32.366341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.757 [2024-07-15 21:44:32.381496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.757 [2024-07-15 21:44:32.381513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.757 [2024-07-15 21:44:32.381519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.757 [2024-07-15 21:44:32.397868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.757 [2024-07-15 21:44:32.397885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.757 [2024-07-15 21:44:32.397891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.757 [2024-07-15 21:44:32.412292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.757 [2024-07-15 21:44:32.412309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.757 [2024-07-15 21:44:32.412315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.757 [2024-07-15 21:44:32.425816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.757 [2024-07-15 21:44:32.425834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.757 [2024-07-15 21:44:32.425840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.757 [2024-07-15 21:44:32.440574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.757 [2024-07-15 21:44:32.440592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.757 [2024-07-15 21:44:32.440601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.757 [2024-07-15 21:44:32.451406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.757 [2024-07-15 21:44:32.451423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.757 [2024-07-15 21:44:32.451429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.757 [2024-07-15 21:44:32.465941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.757 [2024-07-15 21:44:32.465959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.757 [2024-07-15 21:44:32.465965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.757 [2024-07-15 21:44:32.478009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.758 [2024-07-15 21:44:32.478027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.758 [2024-07-15 21:44:32.478033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.758 [2024-07-15 21:44:32.494530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.758 [2024-07-15 21:44:32.494548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.758 [2024-07-15 21:44:32.494554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.758 [2024-07-15 21:44:32.510289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.758 [2024-07-15 21:44:32.510306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.758 [2024-07-15 21:44:32.510312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.758 [2024-07-15 21:44:32.524482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.758 [2024-07-15 21:44:32.524500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.758 [2024-07-15 21:44:32.524507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.758 [2024-07-15 21:44:32.540293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.758 [2024-07-15 21:44:32.540310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.758 [2024-07-15 21:44:32.540317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.758 [2024-07-15 21:44:32.554089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:42.758 [2024-07-15 21:44:32.554106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.758 [2024-07-15 21:44:32.554112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.018 [2024-07-15 21:44:32.569620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1237b20) 00:28:43.018 [2024-07-15 21:44:32.569641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:43.018 [2024-07-15 21:44:32.569647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:43.018
00:28:43.018 Latency(us)
00:28:43.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:43.018 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:43.018 nvme0n1 : 2.00 2036.94 254.62 0.00 0.00 7851.06 4259.84 18131.63
00:28:43.018 ===================================================================================================================
00:28:43.018 Total : 2036.94 254.62 0.00 0.00 7851.06 4259.84 18131.63
00:28:43.018 0
00:28:43.018 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:43.018 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:43.018 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:43.018 | .driver_specific
00:28:43.018 | .nvme_error
00:28:43.018 | .status_code
00:28:43.018 | .command_transient_transport_error'
00:28:43.018 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:43.018 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 131 > 0 ))
00:28:43.018 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2352942
00:28:43.018 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2352942 ']'
00:28:43.018 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2352942
00:28:43.018 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:43.018 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:43.018 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2352942
00:28:43.018 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:43.018 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:43.018 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2352942'
00:28:43.018 killing process with pid 2352942
00:28:43.018 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2352942
00:28:43.018 Received shutdown signal, test time was about 2.000000 seconds
00:28:43.019
00:28:43.019 Latency(us)
00:28:43.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:43.019 ===================================================================================================================
00:28:43.019 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:43.019 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2352942
00:28:43.279 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:43.279 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:43.279 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
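
The trace above is where digest.sh verifies the first run: it reads the per-bdev NVMe error counters back from the bdevperf application over its RPC socket and checks that at least one COMMAND TRANSIENT TRANSPORT ERROR was recorded (131 in this run). A condensed sketch of that check, reusing only the rpc.py path, socket, bdev name and jq filter that appear verbatim in the trace (the errcount variable name is illustrative, not the script's own), would look like:

  # Sketch of the get_transient_errcount step traced above (not the literal digest.sh text).
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # digest.sh@71: the run fails unless transient transport errors were counted

The non-zero count matches the stream of nvme_tcp data digest errors and TRANSIENT TRANSPORT ERROR completions logged above.
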
21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:43.279 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:43.279 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2353710
00:28:43.279 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2353710 /var/tmp/bperf.sock
00:28:43.279 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2353710 ']'
00:28:43.279 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:43.279 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:43.279 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:43.279 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:43.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:43.279 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:43.279 21:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:43.279 [2024-07-15 21:44:32.978083] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization...
00:28:43.279 [2024-07-15 21:44:32.978143] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2353710 ]
00:28:43.279 EAL: No free 2048 kB hugepages reported on node 1
00:28:43.279 [2024-07-15 21:44:33.050435] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:43.542 [2024-07-15 21:44:33.103617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:44.111 21:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:44.111 21:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:44.111 21:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:44.111 21:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:44.370 21:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:44.370 21:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:44.370 21:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:44.370 21:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:44.370 21:44:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:44.370 21:44:33
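The randwrite leg being traced here follows the same pattern as the one before it: start a dedicated bdevperf app on its own RPC socket, enable per-status-code NVMe error statistics, and attach the remote namespace over TCP with data digest (--ddgst) enabled; the accel error injector is toggled separately through rpc_cmd, which in this suite appears to address the nvmf target app rather than the bperf socket. A condensed sketch of the bperf-side setup, using the same flags and addresses as this run (the wait loop below is a simplified stand-in for autotest_common.sh's waitforlisten):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start bdevperf pinned to core 1 (-m 2), driven over /var/tmp/bperf.sock,
  # 4 KiB random writes at queue depth 128 for 2 seconds, waiting for RPC start (-z).
  "$SPDK"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  # Wait until the RPC socket answers.
  until "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  # Count NVMe errors per status code and retry failed I/O at the bdev layer (-1 = unlimited).
  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the target namespace over TCP with data digest enabled.
  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0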
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:44.630 nvme0n1
00:28:44.630 21:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:44.630 21:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:44.630 21:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:44.630 21:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:44.630 21:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:44.630 21:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:44.920 Running I/O for 2 seconds...
00:28:44.920 [2024-07-15 21:44:34.489443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998
00:28:44.920 [2024-07-15 21:44:34.489879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.920 [2024-07-15 21:44:34.489908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:28:44.920 [2024-07-15 21:44:34.501728] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998
00:28:44.920 [2024-07-15 21:44:34.502190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.920 [2024-07-15 21:44:34.502208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:28:44.920 [2024-07-15 21:44:34.514037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998
00:28:44.921 [2024-07-15 21:44:34.514534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.921 [2024-07-15 21:44:34.514551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:28:44.921 [2024-07-15 21:44:34.526262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998
00:28:44.921 [2024-07-15 21:44:34.526544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.921 [2024-07-15 21:44:34.526560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:28:44.921 [2024-07-15 21:44:34.538458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998
00:28:44.921 [2024-07-15 21:44:34.538713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.921 [2024-07-15 21:44:34.538730] nvme_qpair.c:
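With the controller attached, the trace above arms the accel-layer crc32c corruption (every 256th operation) and then kicks off the timed workload through bdevperf's RPC helper. Each injected corruption then surfaces in the output that follows as a repeating trio of messages: the target-side "Data digest error" from tcp.c, the WRITE command it hit, and its COMMAND TRANSIENT TRANSPORT ERROR completion. A condensed sketch of those two steps, with the same paths as this run (rpc.py with no -s talks to the default application socket, which is assumed here to be where rpc_cmd sends the injection):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Corrupt every 256th crc32c operation in the accel framework of the app
  # behind the default RPC socket (an assumption; the trace uses the rpc_cmd wrapper).
  "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # Ask the already-running bdevperf instance to run its configured 2-second job.
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests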
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.921 [2024-07-15 21:44:34.550575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:44.921 [2024-07-15 21:44:34.551025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.921 [2024-07-15 21:44:34.551040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.921 [2024-07-15 21:44:34.562764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:44.921 [2024-07-15 21:44:34.563212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.921 [2024-07-15 21:44:34.563227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.921 [2024-07-15 21:44:34.574890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:44.921 [2024-07-15 21:44:34.575284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.921 [2024-07-15 21:44:34.575300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.921 [2024-07-15 21:44:34.587044] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:44.921 [2024-07-15 21:44:34.587460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.921 [2024-07-15 21:44:34.587476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.921 [2024-07-15 21:44:34.599206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:44.921 [2024-07-15 21:44:34.599676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.921 [2024-07-15 21:44:34.599692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.921 [2024-07-15 21:44:34.611415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:44.921 [2024-07-15 21:44:34.611898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.921 [2024-07-15 21:44:34.611914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.921 [2024-07-15 21:44:34.623535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:44.921 [2024-07-15 21:44:34.624011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.921 [2024-07-15 21:44:34.624027] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.921 [2024-07-15 21:44:34.635696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:44.921 [2024-07-15 21:44:34.635995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.921 [2024-07-15 21:44:34.636011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.921 [2024-07-15 21:44:34.647848] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:44.921 [2024-07-15 21:44:34.648115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.921 [2024-07-15 21:44:34.648134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.921 [2024-07-15 21:44:34.659970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:44.921 [2024-07-15 21:44:34.660350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.921 [2024-07-15 21:44:34.660365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.921 [2024-07-15 21:44:34.672166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:44.921 [2024-07-15 21:44:34.672644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.921 [2024-07-15 21:44:34.672658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.921 [2024-07-15 21:44:34.684290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:44.921 [2024-07-15 21:44:34.684561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.921 [2024-07-15 21:44:34.684576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.921 [2024-07-15 21:44:34.696428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:44.921 [2024-07-15 21:44:34.696729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.921 [2024-07-15 21:44:34.696744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.921 [2024-07-15 21:44:34.708552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:44.921 [2024-07-15 21:44:34.708813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.921 [2024-07-15 
21:44:34.708828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.182 [2024-07-15 21:44:34.720676] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.182 [2024-07-15 21:44:34.721138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.182 [2024-07-15 21:44:34.721153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.182 [2024-07-15 21:44:34.732818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.733214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.733229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.745112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.745379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.745394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.757251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.757632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.757647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.769382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.769695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.769710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.781535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.781827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.781842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.793633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.793902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 
[2024-07-15 21:44:34.793917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.805747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.806035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.806052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.817861] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.818247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.818262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.830032] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.830336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.830352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.842154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.842482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.842498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.854306] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.854618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.854632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.866435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.866910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.866925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.878571] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.878838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:45.183 [2024-07-15 21:44:34.878853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.890723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.890981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.890996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.902898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.903174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.903190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.915031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.915475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.915490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.927178] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.927581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.927596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.939304] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.939661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.939676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.951422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.951702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.951717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.963555] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.963974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7729 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:45.183 [2024-07-15 21:44:34.963989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.183 [2024-07-15 21:44:34.975666] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.183 [2024-07-15 21:44:34.976046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.183 [2024-07-15 21:44:34.976061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:34.987795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:34.988217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:34.988233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:34.999919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.000212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.000232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.012055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.012534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.012550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.024183] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.024575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.024591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.036531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.036818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.036833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.048637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.049124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15496 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.049139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.060736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.061029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.061044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.072845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.073241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.073256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.085049] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.085434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.085450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.097118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.097589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.097605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.109263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.109546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.109561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.121385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.121870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.121887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.133516] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.133799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9092 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.133814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.145596] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.145892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.145907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.157723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.158064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.158079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.169837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.170279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.170294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.181944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.182394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.182409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.194054] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.194433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.194448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.206205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.206490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.206505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.218286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.444 [2024-07-15 21:44:35.218737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9900 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.444 [2024-07-15 21:44:35.218752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.444 [2024-07-15 21:44:35.230404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.445 [2024-07-15 21:44:35.230675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.445 [2024-07-15 21:44:35.230693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.445 [2024-07-15 21:44:35.242469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.445 [2024-07-15 21:44:35.242923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.445 [2024-07-15 21:44:35.242938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.705 [2024-07-15 21:44:35.254595] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.705 [2024-07-15 21:44:35.254987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.705 [2024-07-15 21:44:35.255002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.705 [2024-07-15 21:44:35.266742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.705 [2024-07-15 21:44:35.267141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.705 [2024-07-15 21:44:35.267156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.705 [2024-07-15 21:44:35.278830] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.705 [2024-07-15 21:44:35.279254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.705 [2024-07-15 21:44:35.279269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.705 [2024-07-15 21:44:35.290986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.705 [2024-07-15 21:44:35.291411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.705 [2024-07-15 21:44:35.291426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.705 [2024-07-15 21:44:35.303126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.705 [2024-07-15 21:44:35.303539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:561 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.705 [2024-07-15 21:44:35.303554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.705 [2024-07-15 21:44:35.315215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.705 [2024-07-15 21:44:35.315637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.705 [2024-07-15 21:44:35.315652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.705 [2024-07-15 21:44:35.327334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.705 [2024-07-15 21:44:35.327717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.705 [2024-07-15 21:44:35.327732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.705 [2024-07-15 21:44:35.339489] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.705 [2024-07-15 21:44:35.339791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.705 [2024-07-15 21:44:35.339806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.705 [2024-07-15 21:44:35.351558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.706 [2024-07-15 21:44:35.351822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.706 [2024-07-15 21:44:35.351837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.706 [2024-07-15 21:44:35.363671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.706 [2024-07-15 21:44:35.364055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.706 [2024-07-15 21:44:35.364069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.706 [2024-07-15 21:44:35.375814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.706 [2024-07-15 21:44:35.376108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.706 [2024-07-15 21:44:35.376125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.706 [2024-07-15 21:44:35.387901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.706 [2024-07-15 21:44:35.388371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:3419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.706 [2024-07-15 21:44:35.388386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.706 [2024-07-15 21:44:35.400066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.706 [2024-07-15 21:44:35.400352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.706 [2024-07-15 21:44:35.400367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.706 [2024-07-15 21:44:35.412151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.706 [2024-07-15 21:44:35.412425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.706 [2024-07-15 21:44:35.412440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.706 [2024-07-15 21:44:35.424271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.706 [2024-07-15 21:44:35.424696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.706 [2024-07-15 21:44:35.424711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.706 [2024-07-15 21:44:35.436410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.706 [2024-07-15 21:44:35.436858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.706 [2024-07-15 21:44:35.436873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.706 [2024-07-15 21:44:35.448540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.706 [2024-07-15 21:44:35.448880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.706 [2024-07-15 21:44:35.448896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.706 [2024-07-15 21:44:35.460659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.706 [2024-07-15 21:44:35.460932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.706 [2024-07-15 21:44:35.460947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.706 [2024-07-15 21:44:35.472872] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.706 [2024-07-15 21:44:35.473145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:20131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.706 [2024-07-15 21:44:35.473160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.706 [2024-07-15 21:44:35.484977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.706 [2024-07-15 21:44:35.485253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.706 [2024-07-15 21:44:35.485269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.706 [2024-07-15 21:44:35.497093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.706 [2024-07-15 21:44:35.497465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.706 [2024-07-15 21:44:35.497479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.706 [2024-07-15 21:44:35.509236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.706 [2024-07-15 21:44:35.509519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.706 [2024-07-15 21:44:35.509533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.521435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.521885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.521900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.533571] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.533874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.533889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.545655] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.546124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.546141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.557785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.558077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:24352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.558092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.569895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.570327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.570343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.582044] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.582531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.582546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.594346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.594613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.594628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.606471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.606886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.606901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.618571] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.618847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.618863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.630681] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.630983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.630998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.642793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.643196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:13776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.643211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.654891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.655160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.655175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.667012] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.667323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.667338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.679129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.679531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.679546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.691324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.691587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.691603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.703395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.703748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.703763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.715596] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.716032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.716048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.727712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.727983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:1660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.727997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.739886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.740163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.740178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.752006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.752285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.752300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:45.968 [2024-07-15 21:44:35.764094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:45.968 [2024-07-15 21:44:35.764473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.968 [2024-07-15 21:44:35.764488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.229 [2024-07-15 21:44:35.776247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.229 [2024-07-15 21:44:35.776719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.229 [2024-07-15 21:44:35.776734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.229 [2024-07-15 21:44:35.788391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.229 [2024-07-15 21:44:35.788874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.229 [2024-07-15 21:44:35.788889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.229 [2024-07-15 21:44:35.800469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.229 [2024-07-15 21:44:35.800874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.229 [2024-07-15 21:44:35.800889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.812654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.812933] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.812948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.824735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.825219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.825234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.836830] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.837209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.837224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.848977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.849313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.849329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.861142] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.861438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.861453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.873207] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.873646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.873661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.885324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.885717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.885732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.897423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.897823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.897838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.909561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.909825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.909840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.921653] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.922080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.922096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.933815] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.934265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.934280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.945939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.946351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.946366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.958068] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.958457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.958473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.970202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.970660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.970677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.982306] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.982764] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.982778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:35.994507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:35.994779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:35.994794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:36.006635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:36.006947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:36.006961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:36.018822] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:36.019110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:36.019128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.230 [2024-07-15 21:44:36.031154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.230 [2024-07-15 21:44:36.031422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.230 [2024-07-15 21:44:36.031436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.043301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 [2024-07-15 21:44:36.043689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.043704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.055400] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 [2024-07-15 21:44:36.055799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.055814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.067537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 [2024-07-15 
21:44:36.067822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.067837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.079687] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 [2024-07-15 21:44:36.079955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.079969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.091803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 [2024-07-15 21:44:36.092105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.092120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.103947] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 [2024-07-15 21:44:36.104390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.104406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.116099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 [2024-07-15 21:44:36.116501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.116516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.128286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 [2024-07-15 21:44:36.128577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.128592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.140375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 [2024-07-15 21:44:36.140825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.140840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.152490] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 
[2024-07-15 21:44:36.152898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.152913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.164621] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 [2024-07-15 21:44:36.165057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.165073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.176872] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 [2024-07-15 21:44:36.177149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.177169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.188975] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 [2024-07-15 21:44:36.189431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.189446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.201115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 [2024-07-15 21:44:36.201515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.201530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.213240] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 [2024-07-15 21:44:36.213646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.213661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.225412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.491 [2024-07-15 21:44:36.225714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.225729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.237558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 
00:28:46.491 [2024-07-15 21:44:36.237955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.491 [2024-07-15 21:44:36.237970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.491 [2024-07-15 21:44:36.249662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.492 [2024-07-15 21:44:36.250054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.492 [2024-07-15 21:44:36.250069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.492 [2024-07-15 21:44:36.261816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.492 [2024-07-15 21:44:36.262083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.492 [2024-07-15 21:44:36.262098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.492 [2024-07-15 21:44:36.273936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.492 [2024-07-15 21:44:36.274470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.492 [2024-07-15 21:44:36.274485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.492 [2024-07-15 21:44:36.286095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.492 [2024-07-15 21:44:36.286492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.492 [2024-07-15 21:44:36.286510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.752 [2024-07-15 21:44:36.298255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.752 [2024-07-15 21:44:36.298542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.752 [2024-07-15 21:44:36.298557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.752 [2024-07-15 21:44:36.310405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.752 [2024-07-15 21:44:36.310675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.752 [2024-07-15 21:44:36.310691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.752 [2024-07-15 21:44:36.322572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with 
pdu=0x2000190fc998 00:28:46.752 [2024-07-15 21:44:36.322979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.752 [2024-07-15 21:44:36.322994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.752 [2024-07-15 21:44:36.334704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.752 [2024-07-15 21:44:36.335181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.752 [2024-07-15 21:44:36.335195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.752 [2024-07-15 21:44:36.346785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.752 [2024-07-15 21:44:36.347055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.752 [2024-07-15 21:44:36.347070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.752 [2024-07-15 21:44:36.358873] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.752 [2024-07-15 21:44:36.359167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.752 [2024-07-15 21:44:36.359182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.752 [2024-07-15 21:44:36.371014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.752 [2024-07-15 21:44:36.371309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.752 [2024-07-15 21:44:36.371324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.752 [2024-07-15 21:44:36.383230] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.752 [2024-07-15 21:44:36.383634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.752 [2024-07-15 21:44:36.383649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.752 [2024-07-15 21:44:36.395336] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.752 [2024-07-15 21:44:36.395744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.752 [2024-07-15 21:44:36.395762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.752 [2024-07-15 21:44:36.407467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) 
with pdu=0x2000190fc998 00:28:46.752 [2024-07-15 21:44:36.407865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.752 [2024-07-15 21:44:36.407880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.752 [2024-07-15 21:44:36.419643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.752 [2024-07-15 21:44:36.420063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.752 [2024-07-15 21:44:36.420078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.752 [2024-07-15 21:44:36.431803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.752 [2024-07-15 21:44:36.432208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.752 [2024-07-15 21:44:36.432223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.752 [2024-07-15 21:44:36.443922] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.752 [2024-07-15 21:44:36.444218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.752 [2024-07-15 21:44:36.444233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.752 [2024-07-15 21:44:36.456056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.752 [2024-07-15 21:44:36.456558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.752 [2024-07-15 21:44:36.456573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.752 [2024-07-15 21:44:36.468172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2471260) with pdu=0x2000190fc998 00:28:46.753 [2024-07-15 21:44:36.468623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.753 [2024-07-15 21:44:36.468638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:46.753 00:28:46.753 Latency(us) 00:28:46.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.753 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:46.753 nvme0n1 : 2.01 20965.26 81.90 0.00 0.00 6093.59 5379.41 14964.05 00:28:46.753 =================================================================================================================== 00:28:46.753 Total : 20965.26 81.90 0.00 0.00 6093.59 5379.41 14964.05 00:28:46.753 0 00:28:46.753 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 
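The get_transient_errcount call above is expanded in the traces that follow: digest.sh queries bdev_get_iostat over the bperf RPC socket, filters the result with jq, and then asserts at digest.sh@71 that the count is non-zero. A minimal standalone sketch of the same query, assuming the rpc.py path and the /var/tmp/bperf.sock socket used in this run (not the full digest.sh harness):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 ))   # this run reports 164 transient transport errors, so the check passes

The per-status-code accounting behind this counter comes from the --nvme-error-stat option, which is visible in the setup traces for the next run below.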
00:28:46.753 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:46.753 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:46.753 | .driver_specific 00:28:46.753 | .nvme_error 00:28:46.753 | .status_code 00:28:46.753 | .command_transient_transport_error' 00:28:46.753 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:47.012 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 )) 00:28:47.012 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2353710 00:28:47.012 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2353710 ']' 00:28:47.012 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2353710 00:28:47.012 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:47.012 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:47.012 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2353710 00:28:47.012 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:47.012 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:47.012 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2353710' 00:28:47.012 killing process with pid 2353710 00:28:47.012 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2353710 00:28:47.012 Received shutdown signal, test time was about 2.000000 seconds 00:28:47.012 00:28:47.012 Latency(us) 00:28:47.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.012 =================================================================================================================== 00:28:47.012 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:47.012 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2353710 00:28:47.272 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:47.272 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:47.272 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:47.272 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:47.272 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:47.272 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2354496 00:28:47.272 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2354496 /var/tmp/bperf.sock 00:28:47.272 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2354496 ']' 00:28:47.272 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:47.272 21:44:36 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:47.272 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:47.272 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:47.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:47.272 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:47.272 21:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:47.272 [2024-07-15 21:44:36.882934] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:28:47.272 [2024-07-15 21:44:36.882991] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354496 ] 00:28:47.272 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:47.272 Zero copy mechanism will not be used. 00:28:47.272 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.272 [2024-07-15 21:44:36.953267] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.272 [2024-07-15 21:44:37.006911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.840 21:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:47.840 21:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:47.841 21:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:47.841 21:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:48.100 21:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:48.100 21:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.100 21:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:48.100 21:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.100 21:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.100 21:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.361 nvme0n1 00:28:48.361 21:44:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:48.361 21:44:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.361 21:44:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:48.361 21:44:38 
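Before the second workload starts, the traces that follow configure the new bdevperf instance and the target-side error injection. A sketch of the same RPC sequence with the flags copied from the trace; the default RPC socket for the target-side rpc_cmd calls is an assumption, everything else appears verbatim below:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf() { "$rpc" -s /var/tmp/bperf.sock "$@"; }   # RPC socket of the bdevperf process started above
bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # per-status-code error counters on; bdev retry count -1 as in the trace
"$rpc" accel_error_inject_error -o crc32c -t disable                  # target side: no crc32c corruption while attaching
bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                      # attach with data digest enabled
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32            # target side: start corrupting crc32c results (-i 32 as in the trace)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
  -s /var/tmp/bperf.sock perform_tests                                # drive the 131072-byte randwrite, qd 16 workload

The corrupt injection is what drives the Data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pairs that fill the rest of this run's output.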
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.361 21:44:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:48.361 21:44:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:48.622 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:48.622 Zero copy mechanism will not be used. 00:28:48.622 Running I/O for 2 seconds... 00:28:48.622 [2024-07-15 21:44:38.202109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.202613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.202641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.217479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.217835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.217855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.230793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.231168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.231185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.242393] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.242731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.242748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.253539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.253873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.253890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.264315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.264648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.264666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.275806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.276141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.276158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.286404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.286770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.286787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.296496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.296623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.296639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.307012] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.307252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.307268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.317019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.317261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.317276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.327225] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.327536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.327556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.338131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.338505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 
[2024-07-15 21:44:38.338521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.349693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.349824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.349839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.360946] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.361284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.361300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.371848] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.372085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.372102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.381688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.382022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.382038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.392325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.392678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.392695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.402249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.402411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.402426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.414026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.414397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.414414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.622 [2024-07-15 21:44:38.425592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.622 [2024-07-15 21:44:38.425932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.622 [2024-07-15 21:44:38.425949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.881 [2024-07-15 21:44:38.438199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.881 [2024-07-15 21:44:38.438577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-07-15 21:44:38.438594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.881 [2024-07-15 21:44:38.449888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.881 [2024-07-15 21:44:38.450227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-07-15 21:44:38.450243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.881 [2024-07-15 21:44:38.461759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.881 [2024-07-15 21:44:38.462086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-07-15 21:44:38.462102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.881 [2024-07-15 21:44:38.475210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.475576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.475593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.486761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.487103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.487120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.499093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.499466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.499482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.510975] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.511315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.511332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.522777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.523107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.523127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.535452] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.535781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.535796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.546807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.547042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.547058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.557632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.557795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.557809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.569743] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.570078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.570094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.580782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.581019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.581036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.591529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.591765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.591789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.603267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.603622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.603638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.614078] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.614447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.614463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.625211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.625399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.625416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.637465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.637838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.637854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.648186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.648520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.648537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.659997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 
[2024-07-15 21:44:38.660430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.660447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.671799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.672133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.672149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.882 [2024-07-15 21:44:38.683680] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:48.882 [2024-07-15 21:44:38.683800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-07-15 21:44:38.683814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.696151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.696336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.696351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.709587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.709824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.709841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.722620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.722941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.722957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.735180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.735339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.735354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.746999] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.747347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.747363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.758898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.759024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.759038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.772189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.772539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.772555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.784359] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.784595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.784612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.796576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.796906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.796922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.808364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.808635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.808651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.820907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.821196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.821212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.832903] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.833094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.833111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.844250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.844348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.844363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.855488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.855803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.855819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.865640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.865796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.865810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.876561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.877021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.877037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.886346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.886643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.886660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.141 [2024-07-15 21:44:38.896732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.141 [2024-07-15 21:44:38.896993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.141 [2024-07-15 21:44:38.897010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
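(Editor's note, not part of the captured log.) The records above show the host-side NVMe/TCP transport repeatedly flagging "Data digest error" in its data_crc32_calc_done path: the CRC32C digest recomputed over a received PDU's DATA bytes does not match the DDGST field carried with the PDU, and each affected WRITE is then completed back to the caller with COMMAND TRANSIENT TRANSPORT ERROR (00/22). As a rough, hypothetical illustration of the digest comparison itself -- this is NOT SPDK's tcp.c code, and crc32c()/ddgst_ok() below are stand-in names -- a self-contained CRC32C check could look like the following C sketch:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* CRC32C (Castagnoli, reflected polynomial 0x82F63B78), bit-by-bit.
     * Real transports typically use table-driven or hardware-accelerated
     * variants; this slow form is only for illustration. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)(-(int32_t)(crc & 1)));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Hypothetical check: recompute the digest over the PDU DATA field and
     * compare it with the DDGST that arrived on the wire. A mismatch is the
     * condition the log reports as a "Data digest error". */
    static int ddgst_ok(const uint8_t *data, size_t len, uint32_t received_ddgst)
    {
        return crc32c(data, len) == received_ddgst;
    }

    int main(void)
    {
        uint8_t payload[32] = { 0 };   /* stand-in for one PDU DATA field */
        uint32_t good = crc32c(payload, sizeof(payload));

        printf("match:    %d\n", ddgst_ok(payload, sizeof(payload), good));
        printf("mismatch: %d\n", ddgst_ok(payload, sizeof(payload), good ^ 1u));
        return 0;
    }

A digest mismatch only indicates corruption of the data in transit (or a deliberately corrupted digest, as exercised by digest-error tests), so the command is failed with a transient transport status rather than a media error, leaving the initiator free to retry it.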
00:28:49.141 [2024-07-15 21:44:38.906873] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.142 [2024-07-15 21:44:38.907253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-07-15 21:44:38.907269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.142 [2024-07-15 21:44:38.917421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.142 [2024-07-15 21:44:38.917828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-07-15 21:44:38.917844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.142 [2024-07-15 21:44:38.928182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.142 [2024-07-15 21:44:38.928459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-07-15 21:44:38.928475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.142 [2024-07-15 21:44:38.938798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.142 [2024-07-15 21:44:38.939262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.142 [2024-07-15 21:44:38.939278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.401 [2024-07-15 21:44:38.949542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.401 [2024-07-15 21:44:38.949893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.401 [2024-07-15 21:44:38.949909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.401 [2024-07-15 21:44:38.961243] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.401 [2024-07-15 21:44:38.961771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.401 [2024-07-15 21:44:38.961787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.401 [2024-07-15 21:44:38.971887] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.401 [2024-07-15 21:44:38.972109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.401 [2024-07-15 21:44:38.972130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.401 [2024-07-15 21:44:38.981353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.401 [2024-07-15 21:44:38.981717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.401 [2024-07-15 21:44:38.981734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.401 [2024-07-15 21:44:38.991156] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.401 [2024-07-15 21:44:38.991490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.401 [2024-07-15 21:44:38.991507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.401 [2024-07-15 21:44:39.000867] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.401 [2024-07-15 21:44:39.001231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.401 [2024-07-15 21:44:39.001247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.010566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.010936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.010952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.021997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.022507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.022523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.033275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.033711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.033727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.044084] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.044332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.044348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.054919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.055333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.055349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.066514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.066890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.066906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.078206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.078522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.078539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.088953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.089340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.089357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.101166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.101712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.101728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.113300] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.113641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.113661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.124300] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.124579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.124595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.135014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.135231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.135247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.145473] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.145753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.145770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.156166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.156548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.156563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.167146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.167441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.167457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.178077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.178358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.178374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.188464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.188865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 [2024-07-15 21:44:39.188881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.402 [2024-07-15 21:44:39.199079] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.402 [2024-07-15 21:44:39.199393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.402 
[2024-07-15 21:44:39.199409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.661 [2024-07-15 21:44:39.208582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.661 [2024-07-15 21:44:39.208895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.661 [2024-07-15 21:44:39.208911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.661 [2024-07-15 21:44:39.218219] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.661 [2024-07-15 21:44:39.218639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.661 [2024-07-15 21:44:39.218655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.661 [2024-07-15 21:44:39.228303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.661 [2024-07-15 21:44:39.228689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.661 [2024-07-15 21:44:39.228705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.661 [2024-07-15 21:44:39.238604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.661 [2024-07-15 21:44:39.238990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.661 [2024-07-15 21:44:39.239006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.661 [2024-07-15 21:44:39.249195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.661 [2024-07-15 21:44:39.249579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.661 [2024-07-15 21:44:39.249595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.661 [2024-07-15 21:44:39.259071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.661 [2024-07-15 21:44:39.259440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.661 [2024-07-15 21:44:39.259456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.661 [2024-07-15 21:44:39.269583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.661 [2024-07-15 21:44:39.269906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.661 [2024-07-15 21:44:39.269922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.661 [2024-07-15 21:44:39.279142] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.661 [2024-07-15 21:44:39.279422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.661 [2024-07-15 21:44:39.279438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.661 [2024-07-15 21:44:39.289169] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.661 [2024-07-15 21:44:39.289528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.661 [2024-07-15 21:44:39.289544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.661 [2024-07-15 21:44:39.299296] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.661 [2024-07-15 21:44:39.299642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.661 [2024-07-15 21:44:39.299659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.661 [2024-07-15 21:44:39.308949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.661 [2024-07-15 21:44:39.309268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.661 [2024-07-15 21:44:39.309284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.662 [2024-07-15 21:44:39.319901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.662 [2024-07-15 21:44:39.320142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.662 [2024-07-15 21:44:39.320158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.662 [2024-07-15 21:44:39.330514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.662 [2024-07-15 21:44:39.330893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.662 [2024-07-15 21:44:39.330909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.662 [2024-07-15 21:44:39.340321] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.662 [2024-07-15 21:44:39.340685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.662 [2024-07-15 21:44:39.340702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.662 [2024-07-15 21:44:39.351242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.662 [2024-07-15 21:44:39.351598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.662 [2024-07-15 21:44:39.351614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.662 [2024-07-15 21:44:39.361722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.662 [2024-07-15 21:44:39.362179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.662 [2024-07-15 21:44:39.362195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.662 [2024-07-15 21:44:39.373282] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.662 [2024-07-15 21:44:39.373523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.662 [2024-07-15 21:44:39.373539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.662 [2024-07-15 21:44:39.383951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.662 [2024-07-15 21:44:39.384248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.662 [2024-07-15 21:44:39.384266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.662 [2024-07-15 21:44:39.394011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.662 [2024-07-15 21:44:39.394256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.662 [2024-07-15 21:44:39.394272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.662 [2024-07-15 21:44:39.404100] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.662 [2024-07-15 21:44:39.404429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.662 [2024-07-15 21:44:39.404446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.662 [2024-07-15 21:44:39.415407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.662 [2024-07-15 21:44:39.415863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.662 [2024-07-15 21:44:39.415878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.662 [2024-07-15 21:44:39.426270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.662 [2024-07-15 21:44:39.426423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.662 [2024-07-15 21:44:39.426438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.662 [2024-07-15 21:44:39.436276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.662 [2024-07-15 21:44:39.436654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.662 [2024-07-15 21:44:39.436670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.662 [2024-07-15 21:44:39.446966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.662 [2024-07-15 21:44:39.447453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.662 [2024-07-15 21:44:39.447470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.662 [2024-07-15 21:44:39.458544] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.662 [2024-07-15 21:44:39.458838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.662 [2024-07-15 21:44:39.458855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.921 [2024-07-15 21:44:39.469587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.921 [2024-07-15 21:44:39.470045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.921 [2024-07-15 21:44:39.470062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.921 [2024-07-15 21:44:39.480827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.921 [2024-07-15 21:44:39.481130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.921 [2024-07-15 21:44:39.481147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.921 [2024-07-15 21:44:39.492273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.921 
[2024-07-15 21:44:39.492636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.921 [2024-07-15 21:44:39.492653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.921 [2024-07-15 21:44:39.503299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.921 [2024-07-15 21:44:39.503660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.921 [2024-07-15 21:44:39.503676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.921 [2024-07-15 21:44:39.514891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.921 [2024-07-15 21:44:39.515298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.921 [2024-07-15 21:44:39.515314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.921 [2024-07-15 21:44:39.526813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.921 [2024-07-15 21:44:39.527303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.921 [2024-07-15 21:44:39.527319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.921 [2024-07-15 21:44:39.537870] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.921 [2024-07-15 21:44:39.538106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.921 [2024-07-15 21:44:39.538128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.921 [2024-07-15 21:44:39.547762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.921 [2024-07-15 21:44:39.548034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.921 [2024-07-15 21:44:39.548050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.921 [2024-07-15 21:44:39.558444] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.921 [2024-07-15 21:44:39.558861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.921 [2024-07-15 21:44:39.558878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.921 [2024-07-15 21:44:39.569860] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.921 [2024-07-15 21:44:39.570176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.921 [2024-07-15 21:44:39.570193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.921 [2024-07-15 21:44:39.581029] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.921 [2024-07-15 21:44:39.581390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.921 [2024-07-15 21:44:39.581406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.921 [2024-07-15 21:44:39.592544] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.922 [2024-07-15 21:44:39.592983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.922 [2024-07-15 21:44:39.592999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.922 [2024-07-15 21:44:39.602527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.922 [2024-07-15 21:44:39.602857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.922 [2024-07-15 21:44:39.602873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.922 [2024-07-15 21:44:39.612628] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.922 [2024-07-15 21:44:39.612910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.922 [2024-07-15 21:44:39.612926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.922 [2024-07-15 21:44:39.622554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.922 [2024-07-15 21:44:39.622770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.922 [2024-07-15 21:44:39.622785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.922 [2024-07-15 21:44:39.632474] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.922 [2024-07-15 21:44:39.632825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.922 [2024-07-15 21:44:39.632841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.922 [2024-07-15 21:44:39.643686] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.922 [2024-07-15 21:44:39.644068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.922 [2024-07-15 21:44:39.644085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.922 [2024-07-15 21:44:39.654588] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.922 [2024-07-15 21:44:39.654932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.922 [2024-07-15 21:44:39.654948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.922 [2024-07-15 21:44:39.665931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.922 [2024-07-15 21:44:39.666299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.922 [2024-07-15 21:44:39.666319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.922 [2024-07-15 21:44:39.676602] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.922 [2024-07-15 21:44:39.676888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.922 [2024-07-15 21:44:39.676904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.922 [2024-07-15 21:44:39.687519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.922 [2024-07-15 21:44:39.687765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.922 [2024-07-15 21:44:39.687781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.922 [2024-07-15 21:44:39.698025] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.922 [2024-07-15 21:44:39.698431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.922 [2024-07-15 21:44:39.698447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.922 [2024-07-15 21:44:39.709427] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.922 [2024-07-15 21:44:39.709807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.922 [2024-07-15 21:44:39.709823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:49.922 [2024-07-15 21:44:39.720277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:49.922 [2024-07-15 21:44:39.720730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.922 [2024-07-15 21:44:39.720746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.182 [2024-07-15 21:44:39.731272] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.182 [2024-07-15 21:44:39.731719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.182 [2024-07-15 21:44:39.731735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.182 [2024-07-15 21:44:39.741968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.182 [2024-07-15 21:44:39.742255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.742271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.751781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.752192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.752208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.762378] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.762617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.762634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.771885] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.772112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.772133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.782619] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.783136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.783152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.793593] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.793885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.793902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.802535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.802783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.802799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.812744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.813002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.813017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.822174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.822682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.822698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.832995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.833267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.833282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.843916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.844138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.844157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.855498] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.855924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.855941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.866869] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.867172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.867188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.877097] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.877361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.877376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.886437] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.886712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.886728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.895728] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.896053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.896070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.905838] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.906060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.906075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.914704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.915054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.915070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.924279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.924609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.924625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.933631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.933997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.934013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.945046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.945363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.945379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.956035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.956390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.956407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.966777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.967168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.967184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.183 [2024-07-15 21:44:39.977451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.183 [2024-07-15 21:44:39.977887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.183 [2024-07-15 21:44:39.977903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:39.989030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:39.989478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:39.989495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.000228] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.000467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 
[2024-07-15 21:44:40.000482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.011444] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.011763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.011780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.022363] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.022661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.022676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.032289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.032560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.032577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.041729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.041989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.042005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.051177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.051484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.051500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.061392] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.061609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.061624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.070325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.070722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.070738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.080361] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.080659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.080676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.090978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.091286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.091302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.101988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.102331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.102347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.112863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.113242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.113262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.122909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.123200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.123216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.131878] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.132185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.132201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.142071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.142506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.142522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.151811] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.152252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.152269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.161237] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.161510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.161526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.171371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.171754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.171770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.444 [2024-07-15 21:44:40.181309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24715a0) with pdu=0x2000190fef90 00:28:50.444 [2024-07-15 21:44:40.181651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.444 [2024-07-15 21:44:40.181666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.444 00:28:50.445 Latency(us) 00:28:50.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.445 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:50.445 nvme0n1 : 2.01 2830.80 353.85 0.00 0.00 5642.47 3959.47 19442.35 00:28:50.445 =================================================================================================================== 00:28:50.445 Total : 2830.80 353.85 0.00 0.00 5642.47 3959.47 19442.35 00:28:50.445 0 00:28:50.445 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:50.445 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:50.445 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:50.445 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:50.445 | .driver_specific 00:28:50.445 | .nvme_error 00:28:50.445 | .status_code 00:28:50.445 | .command_transient_transport_error' 00:28:50.705 21:44:40 
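The trace above is how digest.sh tallies the failures it just provoked: get_transient_errcount asks the bperf RPC socket for the bdev's I/O statistics (bperf_rpc bdev_get_iostat) and pulls the NVMe transient-transport-error counter out of the returned JSON with jq. A minimal standalone version of the same query, using the socket path and bdev name from this run, would be:

    # Count the transient transport errors recorded against nvme0n1 by the bdevperf
    # instance listening on /var/tmp/bperf.sock (paths taken from the trace above).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The (( 183 > 0 )) check just below is the test's pass condition: at least some of the data-digest failures shown above must have been surfaced to the host as COMMAND TRANSIENT TRANSPORT ERROR completions and counted against the bdev.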
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 183 > 0 )) 00:28:50.705 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2354496 00:28:50.705 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2354496 ']' 00:28:50.705 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2354496 00:28:50.705 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:50.705 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:50.705 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2354496 00:28:50.705 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:50.705 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:50.705 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2354496' 00:28:50.705 killing process with pid 2354496 00:28:50.705 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2354496 00:28:50.705 Received shutdown signal, test time was about 2.000000 seconds 00:28:50.705 00:28:50.705 Latency(us) 00:28:50.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.705 =================================================================================================================== 00:28:50.706 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:50.706 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2354496 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2352091 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2352091 ']' 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2352091 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2352091 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2352091' 00:28:50.966 killing process with pid 2352091 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2352091 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2352091 00:28:50.966 00:28:50.966 real 0m16.281s 00:28:50.966 user 0m32.156s 00:28:50.966 sys 0m3.113s 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:50.966 ************************************ 00:28:50.966 END 
TEST nvmf_digest_error 00:28:50.966 ************************************ 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:50.966 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:51.226 rmmod nvme_tcp 00:28:51.226 rmmod nvme_fabrics 00:28:51.226 rmmod nvme_keyring 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2352091 ']' 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2352091 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2352091 ']' 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2352091 00:28:51.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2352091) - No such process 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2352091 is not found' 00:28:51.226 Process with pid 2352091 is not found 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:51.226 21:44:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.137 21:44:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:53.137 00:28:53.137 real 0m41.904s 00:28:53.137 user 1m6.152s 00:28:53.137 sys 0m11.570s 00:28:53.137 21:44:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:53.137 21:44:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:53.137 ************************************ 00:28:53.137 END TEST nvmf_digest 00:28:53.137 ************************************ 00:28:53.399 21:44:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:53.399 21:44:42 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:28:53.399 21:44:42 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:28:53.399 21:44:42 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:28:53.399 21:44:42 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:53.399 21:44:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:53.399 21:44:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:53.399 21:44:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:53.399 ************************************ 00:28:53.399 START TEST nvmf_bdevperf 00:28:53.399 ************************************ 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:53.399 * Looking for test storage... 00:28:53.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:53.399 21:44:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:01.555 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:01.555 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:01.555 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
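This stretch of nvmf/common.sh is the NIC discovery step: it builds arrays of the supported Intel E810/X722 and Mellanox PCI device IDs, keeps only the E810 list (the [[ e810 == e810 ]] branch), and then, for each matching PCI function, looks under /sys/bus/pci/devices/<bdf>/net/ for the kernel netdev bound to it, which is how cvl_0_0 and cvl_0_1 are found for 0000:4b:00.0 and 0000:4b:00.1. A condensed sketch of that per-device lookup, with the device-ID, driver and link-state checks the real script performs omitted:

    # For each NVMf-capable PCI function, report the netdev the kernel bound to it.
    # The two BDFs are the E810 ports from this run; the real common.sh also filters
    # by device ID, driver (ice) and link state before using the interfaces.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$dev" ] || continue
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done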
00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:01.555 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:01.555 21:44:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:01.555 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:01.555 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:01.555 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:01.555 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:01.555 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:01.555 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:01.555 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:01.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:01.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:29:01.555 00:29:01.555 --- 10.0.0.2 ping statistics --- 00:29:01.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.555 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:29:01.555 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:01.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:01.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:29:01.555 00:29:01.555 --- 10.0.0.1 ping statistics --- 00:29:01.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.555 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2359189 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2359189 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2359189 ']' 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:01.556 21:44:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.556 [2024-07-15 21:44:50.362413] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
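Both pings passing means the TCP path between the two E810 ports is usable before any NVMe traffic starts. nvmf_tcp_init has split the ports across a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (the initiator side), and port 4420 is opened in iptables. Condensed from the trace above (the address-flush steps are omitted):

    # Target-side port goes into its own namespace; initiator side stays in the root ns.
    # Interfaces and addresses are the ones used in this run.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 2359189), and nvmfappstart waits for it to come up on /var/tmp/spdk.sock before the test continues.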
00:29:01.556 [2024-07-15 21:44:50.362482] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.556 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.556 [2024-07-15 21:44:50.450100] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:01.556 [2024-07-15 21:44:50.545143] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.556 [2024-07-15 21:44:50.545200] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:01.556 [2024-07-15 21:44:50.545208] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.556 [2024-07-15 21:44:50.545215] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.556 [2024-07-15 21:44:50.545221] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:01.556 [2024-07-15 21:44:50.545365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:01.556 [2024-07-15 21:44:50.545658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:01.556 [2024-07-15 21:44:50.545659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.556 [2024-07-15 21:44:51.191555] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.556 Malloc0 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.556 [2024-07-15 21:44:51.254503] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:01.556 { 00:29:01.556 "params": { 00:29:01.556 "name": "Nvme$subsystem", 00:29:01.556 "trtype": "$TEST_TRANSPORT", 00:29:01.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.556 "adrfam": "ipv4", 00:29:01.556 "trsvcid": "$NVMF_PORT", 00:29:01.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.556 "hdgst": ${hdgst:-false}, 00:29:01.556 "ddgst": ${ddgst:-false} 00:29:01.556 }, 00:29:01.556 "method": "bdev_nvme_attach_controller" 00:29:01.556 } 00:29:01.556 EOF 00:29:01.556 )") 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:01.556 21:44:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:01.556 "params": { 00:29:01.556 "name": "Nvme1", 00:29:01.556 "trtype": "tcp", 00:29:01.556 "traddr": "10.0.0.2", 00:29:01.556 "adrfam": "ipv4", 00:29:01.556 "trsvcid": "4420", 00:29:01.556 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:01.556 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:01.556 "hdgst": false, 00:29:01.556 "ddgst": false 00:29:01.556 }, 00:29:01.556 "method": "bdev_nvme_attach_controller" 00:29:01.556 }' 00:29:01.556 [2024-07-15 21:44:51.314468] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
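With the target listening, bdevperf.sh provisions it over the RPC socket (nvmf_create_transport -t tcp -o -u 8192, a 64 MiB / 512-byte Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and a TCP listener on 10.0.0.2:4420) and then runs bdevperf with -q 128 -o 4096 -w verify -t 1 against it. The controller entry that gen_nvmf_target_json prints above, re-laid out for readability, is:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

This is what bdevperf receives as its --json config (it shows up as /dev/fd/62, presumably via process substitution). Note that hdgst and ddgst are both false here: unlike the digest runs earlier, which were clearly operating with data digest enabled (hence the data_crc32_calc_done CRC errors), this pass exercises plain NVMe/TCP with both digests off.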
00:29:01.556 [2024-07-15 21:44:51.314525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359536 ] 00:29:01.556 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.817 [2024-07-15 21:44:51.373657] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.817 [2024-07-15 21:44:51.437955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.817 Running I/O for 1 seconds... 00:29:03.202 00:29:03.202 Latency(us) 00:29:03.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.202 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:03.202 Verification LBA range: start 0x0 length 0x4000 00:29:03.202 Nvme1n1 : 1.00 8988.60 35.11 0.00 0.00 14181.01 3085.65 16056.32 00:29:03.202 =================================================================================================================== 00:29:03.202 Total : 8988.60 35.11 0.00 0.00 14181.01 3085.65 16056.32 00:29:03.202 21:44:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2359822 00:29:03.202 21:44:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:03.202 21:44:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:03.202 21:44:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:03.202 21:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:03.202 21:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:03.202 21:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:03.202 21:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:03.202 { 00:29:03.202 "params": { 00:29:03.202 "name": "Nvme$subsystem", 00:29:03.202 "trtype": "$TEST_TRANSPORT", 00:29:03.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:03.203 "adrfam": "ipv4", 00:29:03.203 "trsvcid": "$NVMF_PORT", 00:29:03.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:03.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:03.203 "hdgst": ${hdgst:-false}, 00:29:03.203 "ddgst": ${ddgst:-false} 00:29:03.203 }, 00:29:03.203 "method": "bdev_nvme_attach_controller" 00:29:03.203 } 00:29:03.203 EOF 00:29:03.203 )") 00:29:03.203 21:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:03.203 21:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:03.203 21:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:03.203 21:44:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:03.203 "params": { 00:29:03.203 "name": "Nvme1", 00:29:03.203 "trtype": "tcp", 00:29:03.203 "traddr": "10.0.0.2", 00:29:03.203 "adrfam": "ipv4", 00:29:03.203 "trsvcid": "4420", 00:29:03.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:03.203 "hdgst": false, 00:29:03.203 "ddgst": false 00:29:03.203 }, 00:29:03.203 "method": "bdev_nvme_attach_controller" 00:29:03.203 }' 00:29:03.203 [2024-07-15 21:44:52.771222] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
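The 1-second verify pass above finishes cleanly at about 8,989 IOPS (35 MiB/s), and the script immediately starts a longer run that is meant to be disturbed: a second bdevperf with -t 15 -f, whose pid (2359822) is recorded as bdevperfpid so the target can be killed out from under it a few lines below while I/O is still in flight. A sketch of that launch; the trace only shows /dev/fd/63, the recorded pid and the sleeps, so the process substitution, backgrounding and pid capture are assumed:

    # 15-second verify run that must ride out a target kill while I/O is outstanding.
    # Same generated JSON config as the first pass; backgrounded (assumed) so the
    # script can interfere with the target while bdevperf keeps issuing I/O.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!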
00:29:03.203 [2024-07-15 21:44:52.771277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359822 ] 00:29:03.203 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.203 [2024-07-15 21:44:52.829907] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.203 [2024-07-15 21:44:52.893774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.463 Running I/O for 15 seconds... 00:29:06.008 21:44:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2359189 00:29:06.008 21:44:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:06.008 [2024-07-15 21:44:55.739959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.008 [2024-07-15 21:44:55.740001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.008 [2024-07-15 21:44:55.740034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.008 [2024-07-15 21:44:55.740055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.008 [2024-07-15 21:44:55.740073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.008 [2024-07-15 21:44:55.740090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.008 [2024-07-15 21:44:55.740111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.008 [2024-07-15 21:44:55.740229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.008 [2024-07-15 21:44:55.740252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.008 [2024-07-15 21:44:55.740269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.008 [2024-07-15 21:44:55.740286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
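Everything from here to the end of this excerpt is the direct consequence of that kill: host/bdevperf.sh@33 sends kill -9 to the nvmf target (pid 2359189) a few seconds into the 15-second run, the TCP qpair to it drops, and the host driver fails every outstanding READ and WRITE on qid:1 back to bdevperf as ABORTED - SQ DELETION (00/08). Each line is a distinct queued command being completed with that status (note the different cid and lba values), not the same error looping. A quick way to tally them when reading a saved copy of this console log (file name illustrative):

    grep -o 'ABORTED - SQ DELETION' nvmf-tcp-phy-autotest-console.log | wc -l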
00:29:06.008 [2024-07-15 21:44:55.740433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.008 [2024-07-15 21:44:55.740552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.008 [2024-07-15 21:44:55.740559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740602] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740769] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88080 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.740946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.009 [2024-07-15 21:44:55.740962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.009 [2024-07-15 21:44:55.740979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.740988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.009 [2024-07-15 21:44:55.740995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.009 [2024-07-15 21:44:55.741012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.009 [2024-07-15 21:44:55.741028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.009 [2024-07-15 21:44:55.741045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.009 [2024-07-15 21:44:55.741062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.741078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.741096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:06.009 [2024-07-15 21:44:55.741112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.741133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.741150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.741166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.741183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.741199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.009 [2024-07-15 21:44:55.741216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.009 [2024-07-15 21:44:55.741233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.009 [2024-07-15 21:44:55.741250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.009 [2024-07-15 21:44:55.741266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.009 [2024-07-15 21:44:55.741276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.009 [2024-07-15 21:44:55.741283] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.010 [2024-07-15 21:44:55.741351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741451] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.010 [2024-07-15 21:44:55.741500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.010 [2024-07-15 21:44:55.741518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.010 [2024-07-15 21:44:55.741534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.010 [2024-07-15 21:44:55.741552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.010 [2024-07-15 21:44:55.741568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.010 [2024-07-15 21:44:55.741585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.010 [2024-07-15 21:44:55.741618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 
[2024-07-15 21:44:55.741795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.010 [2024-07-15 21:44:55.741985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.010 [2024-07-15 21:44:55.741994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742134] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.011 [2024-07-15 21:44:55.742258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5d550 is same with the state(5) to be set 00:29:06.011 [2024-07-15 21:44:55.742276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:06.011 [2024-07-15 21:44:55.742282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:06.011 [2024-07-15 21:44:55.742289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87768 len:8 PRP1 0x0 PRP2 0x0 00:29:06.011 [2024-07-15 21:44:55.742299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742338] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f5d550 was disconnected and freed. reset controller. 00:29:06.011 [2024-07-15 21:44:55.742383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:06.011 [2024-07-15 21:44:55.742393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:06.011 [2024-07-15 21:44:55.742409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:06.011 [2024-07-15 21:44:55.742424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:06.011 [2024-07-15 21:44:55.742441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.011 [2024-07-15 21:44:55.742448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.011 [2024-07-15 21:44:55.745961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.011 [2024-07-15 21:44:55.745982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.011 [2024-07-15 21:44:55.746833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.011 [2024-07-15 21:44:55.746850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.011 [2024-07-15 21:44:55.746859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.011 [2024-07-15 21:44:55.747079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.011 [2024-07-15 21:44:55.747304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.011 [2024-07-15 21:44:55.747312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.011 [2024-07-15 21:44:55.747321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.011 [2024-07-15 21:44:55.750871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
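Note on the repeated "posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111" entries above: errno 111 is ECONNREFUSED from the plain TCP connect toward the NVMe/TCP listener at 10.0.0.2:4420 while the target side is torn down. A minimal sketch of where that errno comes from, using only POSIX calls and assuming nothing about SPDK's own socket layer (this is not posix_sock_create() itself):

/* Hedged illustration only: attempt a TCP connect to the listener address
 * seen in the log (10.0.0.2:4420). While the target listener is down,
 * connect() fails and errno is 111 (ECONNREFUSED). */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Expect errno == ECONNREFUSED (111) while the target is down. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }

    close(fd);
    return 0;
}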
00:29:06.011 [2024-07-15 21:44:55.760085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.011 [2024-07-15 21:44:55.760808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.011 [2024-07-15 21:44:55.760847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.011 [2024-07-15 21:44:55.760858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.011 [2024-07-15 21:44:55.761099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.011 [2024-07-15 21:44:55.761332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.011 [2024-07-15 21:44:55.761342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.011 [2024-07-15 21:44:55.761350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.011 [2024-07-15 21:44:55.764908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.011 [2024-07-15 21:44:55.773921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.011 [2024-07-15 21:44:55.774648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.011 [2024-07-15 21:44:55.774685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.011 [2024-07-15 21:44:55.774696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.011 [2024-07-15 21:44:55.774936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.011 [2024-07-15 21:44:55.775188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.011 [2024-07-15 21:44:55.775199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.011 [2024-07-15 21:44:55.775215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.011 [2024-07-15 21:44:55.778770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.011 [2024-07-15 21:44:55.787784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.011 [2024-07-15 21:44:55.788520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.011 [2024-07-15 21:44:55.788558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.011 [2024-07-15 21:44:55.788570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.011 [2024-07-15 21:44:55.788811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.011 [2024-07-15 21:44:55.789033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.011 [2024-07-15 21:44:55.789041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.011 [2024-07-15 21:44:55.789048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.011 [2024-07-15 21:44:55.792614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.011 [2024-07-15 21:44:55.801631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.011 [2024-07-15 21:44:55.802421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.011 [2024-07-15 21:44:55.802458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.011 [2024-07-15 21:44:55.802469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.011 [2024-07-15 21:44:55.802708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.011 [2024-07-15 21:44:55.802931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.011 [2024-07-15 21:44:55.802939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.011 [2024-07-15 21:44:55.802946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.011 [2024-07-15 21:44:55.806507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.274 [2024-07-15 21:44:55.815509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.274 [2024-07-15 21:44:55.816247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.274 [2024-07-15 21:44:55.816284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.274 [2024-07-15 21:44:55.816296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.274 [2024-07-15 21:44:55.816539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.274 [2024-07-15 21:44:55.816761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.274 [2024-07-15 21:44:55.816769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.274 [2024-07-15 21:44:55.816777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.274 [2024-07-15 21:44:55.820336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.274 [2024-07-15 21:44:55.829346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.274 [2024-07-15 21:44:55.830091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.274 [2024-07-15 21:44:55.830138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.274 [2024-07-15 21:44:55.830150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.274 [2024-07-15 21:44:55.830389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.274 [2024-07-15 21:44:55.830611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.274 [2024-07-15 21:44:55.830620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.274 [2024-07-15 21:44:55.830627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.274 [2024-07-15 21:44:55.834179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.274 [2024-07-15 21:44:55.843177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.274 [2024-07-15 21:44:55.843870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.274 [2024-07-15 21:44:55.843888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.274 [2024-07-15 21:44:55.843896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.274 [2024-07-15 21:44:55.844116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.274 [2024-07-15 21:44:55.844340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.274 [2024-07-15 21:44:55.844349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.274 [2024-07-15 21:44:55.844356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.274 [2024-07-15 21:44:55.847899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.274 [2024-07-15 21:44:55.857107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.274 [2024-07-15 21:44:55.857810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.274 [2024-07-15 21:44:55.857847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.274 [2024-07-15 21:44:55.857857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.274 [2024-07-15 21:44:55.858096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.274 [2024-07-15 21:44:55.858328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.274 [2024-07-15 21:44:55.858337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.274 [2024-07-15 21:44:55.858345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.274 [2024-07-15 21:44:55.861899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.274 [2024-07-15 21:44:55.870906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.274 [2024-07-15 21:44:55.871631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.274 [2024-07-15 21:44:55.871668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.274 [2024-07-15 21:44:55.871678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.274 [2024-07-15 21:44:55.871917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.274 [2024-07-15 21:44:55.872155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.274 [2024-07-15 21:44:55.872164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.274 [2024-07-15 21:44:55.872172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.274 [2024-07-15 21:44:55.875736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.274 [2024-07-15 21:44:55.884819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.274 [2024-07-15 21:44:55.885608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.274 [2024-07-15 21:44:55.885645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.274 [2024-07-15 21:44:55.885656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.274 [2024-07-15 21:44:55.885895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.274 [2024-07-15 21:44:55.886117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.274 [2024-07-15 21:44:55.886135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.274 [2024-07-15 21:44:55.886143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.274 [2024-07-15 21:44:55.889694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.274 [2024-07-15 21:44:55.898692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.274 [2024-07-15 21:44:55.899454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.274 [2024-07-15 21:44:55.899491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.274 [2024-07-15 21:44:55.899502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.274 [2024-07-15 21:44:55.899741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.274 [2024-07-15 21:44:55.899965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.274 [2024-07-15 21:44:55.899973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.274 [2024-07-15 21:44:55.899980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.274 [2024-07-15 21:44:55.903541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.274 [2024-07-15 21:44:55.912540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.274 [2024-07-15 21:44:55.913177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.274 [2024-07-15 21:44:55.913214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.274 [2024-07-15 21:44:55.913226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.274 [2024-07-15 21:44:55.913467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.274 [2024-07-15 21:44:55.913690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.274 [2024-07-15 21:44:55.913698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.274 [2024-07-15 21:44:55.913705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.274 [2024-07-15 21:44:55.917271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.274 [2024-07-15 21:44:55.926480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.274 [2024-07-15 21:44:55.927236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.274 [2024-07-15 21:44:55.927273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.275 [2024-07-15 21:44:55.927285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.275 [2024-07-15 21:44:55.927526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.275 [2024-07-15 21:44:55.927748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.275 [2024-07-15 21:44:55.927757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.275 [2024-07-15 21:44:55.927765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.275 [2024-07-15 21:44:55.931330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.275 [2024-07-15 21:44:55.940330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.275 [2024-07-15 21:44:55.941062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.275 [2024-07-15 21:44:55.941098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.275 [2024-07-15 21:44:55.941111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.275 [2024-07-15 21:44:55.941361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.275 [2024-07-15 21:44:55.941585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.275 [2024-07-15 21:44:55.941593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.275 [2024-07-15 21:44:55.941601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.275 [2024-07-15 21:44:55.945157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.275 [2024-07-15 21:44:55.954154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.275 [2024-07-15 21:44:55.954890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.275 [2024-07-15 21:44:55.954927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.275 [2024-07-15 21:44:55.954939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.275 [2024-07-15 21:44:55.955186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.275 [2024-07-15 21:44:55.955409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.275 [2024-07-15 21:44:55.955418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.275 [2024-07-15 21:44:55.955426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.275 [2024-07-15 21:44:55.958976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.275 [2024-07-15 21:44:55.967976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.275 [2024-07-15 21:44:55.968648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.275 [2024-07-15 21:44:55.968666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.275 [2024-07-15 21:44:55.968678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.275 [2024-07-15 21:44:55.968898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.275 [2024-07-15 21:44:55.969117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.275 [2024-07-15 21:44:55.969131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.275 [2024-07-15 21:44:55.969138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.275 [2024-07-15 21:44:55.972682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.275 [2024-07-15 21:44:55.981897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.275 [2024-07-15 21:44:55.982561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.275 [2024-07-15 21:44:55.982577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.275 [2024-07-15 21:44:55.982585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.275 [2024-07-15 21:44:55.982803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.275 [2024-07-15 21:44:55.983022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.275 [2024-07-15 21:44:55.983029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.275 [2024-07-15 21:44:55.983036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.275 [2024-07-15 21:44:55.986582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.275 [2024-07-15 21:44:55.995783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.275 [2024-07-15 21:44:55.996119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.275 [2024-07-15 21:44:55.996148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.275 [2024-07-15 21:44:55.996156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.275 [2024-07-15 21:44:55.996378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.275 [2024-07-15 21:44:55.996597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.275 [2024-07-15 21:44:55.996605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.275 [2024-07-15 21:44:55.996612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.275 [2024-07-15 21:44:56.000164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.275 [2024-07-15 21:44:56.009575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.275 [2024-07-15 21:44:56.010240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.275 [2024-07-15 21:44:56.010277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.275 [2024-07-15 21:44:56.010289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.275 [2024-07-15 21:44:56.010532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.275 [2024-07-15 21:44:56.010755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.275 [2024-07-15 21:44:56.010768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.275 [2024-07-15 21:44:56.010775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.275 [2024-07-15 21:44:56.014335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.275 [2024-07-15 21:44:56.023543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.275 [2024-07-15 21:44:56.024237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.275 [2024-07-15 21:44:56.024274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.275 [2024-07-15 21:44:56.024286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.275 [2024-07-15 21:44:56.024528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.275 [2024-07-15 21:44:56.024751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.275 [2024-07-15 21:44:56.024760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.275 [2024-07-15 21:44:56.024768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.275 [2024-07-15 21:44:56.028327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.275 [2024-07-15 21:44:56.037347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.275 [2024-07-15 21:44:56.038021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.275 [2024-07-15 21:44:56.038058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.275 [2024-07-15 21:44:56.038070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.275 [2024-07-15 21:44:56.038320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.275 [2024-07-15 21:44:56.038544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.275 [2024-07-15 21:44:56.038552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.275 [2024-07-15 21:44:56.038559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.275 [2024-07-15 21:44:56.042107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.275 [2024-07-15 21:44:56.051323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.275 [2024-07-15 21:44:56.052104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.275 [2024-07-15 21:44:56.052149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.275 [2024-07-15 21:44:56.052160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.275 [2024-07-15 21:44:56.052398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.275 [2024-07-15 21:44:56.052621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.275 [2024-07-15 21:44:56.052629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.275 [2024-07-15 21:44:56.052636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.275 [2024-07-15 21:44:56.056190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.275 [2024-07-15 21:44:56.065198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.275 [2024-07-15 21:44:56.065793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.275 [2024-07-15 21:44:56.065830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.275 [2024-07-15 21:44:56.065840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.275 [2024-07-15 21:44:56.066079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.275 [2024-07-15 21:44:56.066309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.275 [2024-07-15 21:44:56.066318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.275 [2024-07-15 21:44:56.066326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.275 [2024-07-15 21:44:56.069877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.537 [2024-07-15 21:44:56.079126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.537 [2024-07-15 21:44:56.079855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.537 [2024-07-15 21:44:56.079892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.537 [2024-07-15 21:44:56.079902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.537 [2024-07-15 21:44:56.080148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.537 [2024-07-15 21:44:56.080371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.537 [2024-07-15 21:44:56.080379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.537 [2024-07-15 21:44:56.080387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.537 [2024-07-15 21:44:56.083938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.537 [2024-07-15 21:44:56.092937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.537 [2024-07-15 21:44:56.093670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.537 [2024-07-15 21:44:56.093688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.538 [2024-07-15 21:44:56.093696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.538 [2024-07-15 21:44:56.093915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.538 [2024-07-15 21:44:56.094140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.538 [2024-07-15 21:44:56.094148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.538 [2024-07-15 21:44:56.094155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.538 [2024-07-15 21:44:56.097703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.538 [2024-07-15 21:44:56.106909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.538 [2024-07-15 21:44:56.107678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.538 [2024-07-15 21:44:56.107715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.538 [2024-07-15 21:44:56.107727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.538 [2024-07-15 21:44:56.107971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.538 [2024-07-15 21:44:56.108200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.538 [2024-07-15 21:44:56.108209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.538 [2024-07-15 21:44:56.108217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.538 [2024-07-15 21:44:56.111769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.538 [2024-07-15 21:44:56.120766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.538 [2024-07-15 21:44:56.121517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.538 [2024-07-15 21:44:56.121554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.538 [2024-07-15 21:44:56.121566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.538 [2024-07-15 21:44:56.121806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.538 [2024-07-15 21:44:56.122029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.538 [2024-07-15 21:44:56.122037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.538 [2024-07-15 21:44:56.122045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.538 [2024-07-15 21:44:56.125602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.538 [2024-07-15 21:44:56.134609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.538 [2024-07-15 21:44:56.135456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.538 [2024-07-15 21:44:56.135493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.538 [2024-07-15 21:44:56.135504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.538 [2024-07-15 21:44:56.135743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.538 [2024-07-15 21:44:56.135965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.538 [2024-07-15 21:44:56.135974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.538 [2024-07-15 21:44:56.135981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.538 [2024-07-15 21:44:56.139539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.538 [2024-07-15 21:44:56.148540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.538 [2024-07-15 21:44:56.149184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.538 [2024-07-15 21:44:56.149203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.538 [2024-07-15 21:44:56.149210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.538 [2024-07-15 21:44:56.149431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.538 [2024-07-15 21:44:56.149650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.538 [2024-07-15 21:44:56.149658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.538 [2024-07-15 21:44:56.149669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.538 [2024-07-15 21:44:56.153223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.538 [2024-07-15 21:44:56.162427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.538 [2024-07-15 21:44:56.163083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.538 [2024-07-15 21:44:56.163120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.538 [2024-07-15 21:44:56.163140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.538 [2024-07-15 21:44:56.163383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.538 [2024-07-15 21:44:56.163606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.538 [2024-07-15 21:44:56.163614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.538 [2024-07-15 21:44:56.163621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.538 [2024-07-15 21:44:56.167173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.538 [2024-07-15 21:44:56.176392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.538 [2024-07-15 21:44:56.177145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.538 [2024-07-15 21:44:56.177182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.538 [2024-07-15 21:44:56.177192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.538 [2024-07-15 21:44:56.177431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.538 [2024-07-15 21:44:56.177654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.538 [2024-07-15 21:44:56.177663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.538 [2024-07-15 21:44:56.177670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.538 [2024-07-15 21:44:56.181230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.538 [2024-07-15 21:44:56.190227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.538 [2024-07-15 21:44:56.190983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.538 [2024-07-15 21:44:56.191020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.538 [2024-07-15 21:44:56.191031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.538 [2024-07-15 21:44:56.191277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.538 [2024-07-15 21:44:56.191501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.538 [2024-07-15 21:44:56.191510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.538 [2024-07-15 21:44:56.191517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.538 [2024-07-15 21:44:56.195069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.538 [2024-07-15 21:44:56.204112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.538 [2024-07-15 21:44:56.204879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.538 [2024-07-15 21:44:56.204915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.538 [2024-07-15 21:44:56.204926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.538 [2024-07-15 21:44:56.205172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.538 [2024-07-15 21:44:56.205395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.538 [2024-07-15 21:44:56.205404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.538 [2024-07-15 21:44:56.205412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.538 [2024-07-15 21:44:56.208966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.538 [2024-07-15 21:44:56.217963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.538 [2024-07-15 21:44:56.218700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.538 [2024-07-15 21:44:56.218737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.538 [2024-07-15 21:44:56.218748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.538 [2024-07-15 21:44:56.218987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.538 [2024-07-15 21:44:56.219217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.538 [2024-07-15 21:44:56.219226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.538 [2024-07-15 21:44:56.219234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.538 [2024-07-15 21:44:56.222785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.538 [2024-07-15 21:44:56.231787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.538 [2024-07-15 21:44:56.232523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.538 [2024-07-15 21:44:56.232560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.538 [2024-07-15 21:44:56.232570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.538 [2024-07-15 21:44:56.232809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.538 [2024-07-15 21:44:56.233031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.538 [2024-07-15 21:44:56.233040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.538 [2024-07-15 21:44:56.233047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.538 [2024-07-15 21:44:56.236605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.538 [2024-07-15 21:44:56.245606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.538 [2024-07-15 21:44:56.246460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.539 [2024-07-15 21:44:56.246497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.539 [2024-07-15 21:44:56.246508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.539 [2024-07-15 21:44:56.246751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.539 [2024-07-15 21:44:56.246974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.539 [2024-07-15 21:44:56.246982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.539 [2024-07-15 21:44:56.246990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.539 [2024-07-15 21:44:56.250548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.539 [2024-07-15 21:44:56.259545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.539 [2024-07-15 21:44:56.260192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.539 [2024-07-15 21:44:56.260210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.539 [2024-07-15 21:44:56.260218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.539 [2024-07-15 21:44:56.260438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.539 [2024-07-15 21:44:56.260657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.539 [2024-07-15 21:44:56.260664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.539 [2024-07-15 21:44:56.260671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.539 [2024-07-15 21:44:56.264219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.539 [2024-07-15 21:44:56.273423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.539 [2024-07-15 21:44:56.274173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.539 [2024-07-15 21:44:56.274210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.539 [2024-07-15 21:44:56.274222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.539 [2024-07-15 21:44:56.274465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.539 [2024-07-15 21:44:56.274687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.539 [2024-07-15 21:44:56.274696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.539 [2024-07-15 21:44:56.274703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.539 [2024-07-15 21:44:56.278272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.539 [2024-07-15 21:44:56.287285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.539 [2024-07-15 21:44:56.287996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.539 [2024-07-15 21:44:56.288033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.539 [2024-07-15 21:44:56.288045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.539 [2024-07-15 21:44:56.288292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.539 [2024-07-15 21:44:56.288516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.539 [2024-07-15 21:44:56.288524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.539 [2024-07-15 21:44:56.288536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.539 [2024-07-15 21:44:56.292089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.539 [2024-07-15 21:44:56.301085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.539 [2024-07-15 21:44:56.301800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.539 [2024-07-15 21:44:56.301837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.539 [2024-07-15 21:44:56.301848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.539 [2024-07-15 21:44:56.302086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.539 [2024-07-15 21:44:56.302316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.539 [2024-07-15 21:44:56.302325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.539 [2024-07-15 21:44:56.302332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.539 [2024-07-15 21:44:56.305885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.539 [2024-07-15 21:44:56.314881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.539 [2024-07-15 21:44:56.315549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.539 [2024-07-15 21:44:56.315586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.539 [2024-07-15 21:44:56.315596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.539 [2024-07-15 21:44:56.315835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.539 [2024-07-15 21:44:56.316057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.539 [2024-07-15 21:44:56.316066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.539 [2024-07-15 21:44:56.316073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.539 [2024-07-15 21:44:56.319632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.539 [2024-07-15 21:44:56.328853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.539 [2024-07-15 21:44:56.329623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.539 [2024-07-15 21:44:56.329660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.539 [2024-07-15 21:44:56.329670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.539 [2024-07-15 21:44:56.329909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.539 [2024-07-15 21:44:56.330139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.539 [2024-07-15 21:44:56.330148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.539 [2024-07-15 21:44:56.330155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.539 [2024-07-15 21:44:56.333705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.801 [2024-07-15 21:44:56.342704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.801 [2024-07-15 21:44:56.343391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.801 [2024-07-15 21:44:56.343432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.801 [2024-07-15 21:44:56.343444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.801 [2024-07-15 21:44:56.343682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.801 [2024-07-15 21:44:56.343905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.801 [2024-07-15 21:44:56.343913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.801 [2024-07-15 21:44:56.343920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.801 [2024-07-15 21:44:56.347478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.801 [2024-07-15 21:44:56.356684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.801 [2024-07-15 21:44:56.357426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.801 [2024-07-15 21:44:56.357463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.801 [2024-07-15 21:44:56.357474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.801 [2024-07-15 21:44:56.357713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.801 [2024-07-15 21:44:56.357935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.801 [2024-07-15 21:44:56.357943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.801 [2024-07-15 21:44:56.357951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.801 [2024-07-15 21:44:56.361508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.801 [2024-07-15 21:44:56.370507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.801 [2024-07-15 21:44:56.371222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.801 [2024-07-15 21:44:56.371259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.801 [2024-07-15 21:44:56.371271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.801 [2024-07-15 21:44:56.371514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.801 [2024-07-15 21:44:56.371737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.801 [2024-07-15 21:44:56.371745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.801 [2024-07-15 21:44:56.371752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.801 [2024-07-15 21:44:56.375319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.801 [2024-07-15 21:44:56.384325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.801 [2024-07-15 21:44:56.385044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.801 [2024-07-15 21:44:56.385081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.801 [2024-07-15 21:44:56.385091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.801 [2024-07-15 21:44:56.385339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.801 [2024-07-15 21:44:56.385566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.801 [2024-07-15 21:44:56.385575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.801 [2024-07-15 21:44:56.385583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.801 [2024-07-15 21:44:56.389132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.801 [2024-07-15 21:44:56.398125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.801 [2024-07-15 21:44:56.398894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.801 [2024-07-15 21:44:56.398931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.801 [2024-07-15 21:44:56.398942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.801 [2024-07-15 21:44:56.399189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.801 [2024-07-15 21:44:56.399412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.801 [2024-07-15 21:44:56.399421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.801 [2024-07-15 21:44:56.399428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.801 [2024-07-15 21:44:56.402979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.801 [2024-07-15 21:44:56.411978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.801 [2024-07-15 21:44:56.412706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.802 [2024-07-15 21:44:56.412743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.802 [2024-07-15 21:44:56.412754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.802 [2024-07-15 21:44:56.412993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.802 [2024-07-15 21:44:56.413224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.802 [2024-07-15 21:44:56.413234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.802 [2024-07-15 21:44:56.413241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.802 [2024-07-15 21:44:56.416793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.802 [2024-07-15 21:44:56.425790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.802 [2024-07-15 21:44:56.426520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.802 [2024-07-15 21:44:56.426557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.802 [2024-07-15 21:44:56.426568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.802 [2024-07-15 21:44:56.426806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.802 [2024-07-15 21:44:56.427028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.802 [2024-07-15 21:44:56.427037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.802 [2024-07-15 21:44:56.427044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.802 [2024-07-15 21:44:56.430611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.802 [2024-07-15 21:44:56.439612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.802 [2024-07-15 21:44:56.440416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.802 [2024-07-15 21:44:56.440452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.802 [2024-07-15 21:44:56.440463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.802 [2024-07-15 21:44:56.440701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.802 [2024-07-15 21:44:56.440924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.802 [2024-07-15 21:44:56.440932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.802 [2024-07-15 21:44:56.440939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.802 [2024-07-15 21:44:56.444499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.802 [2024-07-15 21:44:56.453494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.802 [2024-07-15 21:44:56.453975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.802 [2024-07-15 21:44:56.453997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.802 [2024-07-15 21:44:56.454005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.802 [2024-07-15 21:44:56.454233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.802 [2024-07-15 21:44:56.454453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.802 [2024-07-15 21:44:56.454461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.802 [2024-07-15 21:44:56.454468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.802 [2024-07-15 21:44:56.458133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.802 [2024-07-15 21:44:56.467337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.802 [2024-07-15 21:44:56.468094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.802 [2024-07-15 21:44:56.468138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.802 [2024-07-15 21:44:56.468152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.802 [2024-07-15 21:44:56.468392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.802 [2024-07-15 21:44:56.468614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.802 [2024-07-15 21:44:56.468622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.802 [2024-07-15 21:44:56.468630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.802 [2024-07-15 21:44:56.472183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.802 [2024-07-15 21:44:56.481191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.802 [2024-07-15 21:44:56.481927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.802 [2024-07-15 21:44:56.481964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.802 [2024-07-15 21:44:56.481983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.802 [2024-07-15 21:44:56.482231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.802 [2024-07-15 21:44:56.482455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.802 [2024-07-15 21:44:56.482463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.802 [2024-07-15 21:44:56.482470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.802 [2024-07-15 21:44:56.486020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.802 [2024-07-15 21:44:56.495022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.802 [2024-07-15 21:44:56.495767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.802 [2024-07-15 21:44:56.495804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.802 [2024-07-15 21:44:56.495815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.802 [2024-07-15 21:44:56.496053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.802 [2024-07-15 21:44:56.496285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.802 [2024-07-15 21:44:56.496294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.802 [2024-07-15 21:44:56.496302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.802 [2024-07-15 21:44:56.499853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.802 [2024-07-15 21:44:56.508850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.802 [2024-07-15 21:44:56.509594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.802 [2024-07-15 21:44:56.509631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.802 [2024-07-15 21:44:56.509641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.802 [2024-07-15 21:44:56.509880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.802 [2024-07-15 21:44:56.510103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.802 [2024-07-15 21:44:56.510111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.802 [2024-07-15 21:44:56.510119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.802 [2024-07-15 21:44:56.513674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.802 [2024-07-15 21:44:56.522709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.802 [2024-07-15 21:44:56.523445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.802 [2024-07-15 21:44:56.523482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.802 [2024-07-15 21:44:56.523492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.802 [2024-07-15 21:44:56.523731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.802 [2024-07-15 21:44:56.523953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.802 [2024-07-15 21:44:56.523965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.802 [2024-07-15 21:44:56.523973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.802 [2024-07-15 21:44:56.527532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.802 [2024-07-15 21:44:56.536534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.802 [2024-07-15 21:44:56.537224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.802 [2024-07-15 21:44:56.537261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.802 [2024-07-15 21:44:56.537273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.802 [2024-07-15 21:44:56.537513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.802 [2024-07-15 21:44:56.537736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.802 [2024-07-15 21:44:56.537744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.802 [2024-07-15 21:44:56.537751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.802 [2024-07-15 21:44:56.541309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.802 [2024-07-15 21:44:56.550511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.802 [2024-07-15 21:44:56.551203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.802 [2024-07-15 21:44:56.551240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.802 [2024-07-15 21:44:56.551251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.802 [2024-07-15 21:44:56.551489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.802 [2024-07-15 21:44:56.551712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.802 [2024-07-15 21:44:56.551720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.802 [2024-07-15 21:44:56.551728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.802 [2024-07-15 21:44:56.555287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.802 [2024-07-15 21:44:56.564492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.802 [2024-07-15 21:44:56.565213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.803 [2024-07-15 21:44:56.565250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.803 [2024-07-15 21:44:56.565260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.803 [2024-07-15 21:44:56.565499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.803 [2024-07-15 21:44:56.565722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.803 [2024-07-15 21:44:56.565730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.803 [2024-07-15 21:44:56.565737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.803 [2024-07-15 21:44:56.569296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.803 [2024-07-15 21:44:56.578312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.803 [2024-07-15 21:44:56.579037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.803 [2024-07-15 21:44:56.579073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.803 [2024-07-15 21:44:56.579083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.803 [2024-07-15 21:44:56.579331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.803 [2024-07-15 21:44:56.579554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.803 [2024-07-15 21:44:56.579563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.803 [2024-07-15 21:44:56.579570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.803 [2024-07-15 21:44:56.583119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.803 [2024-07-15 21:44:56.592116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.803 [2024-07-15 21:44:56.592845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.803 [2024-07-15 21:44:56.592882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:06.803 [2024-07-15 21:44:56.592892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:06.803 [2024-07-15 21:44:56.593139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:06.803 [2024-07-15 21:44:56.593363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.803 [2024-07-15 21:44:56.593371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.803 [2024-07-15 21:44:56.593379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.803 [2024-07-15 21:44:56.596928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.065 [2024-07-15 21:44:56.605924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.065 [2024-07-15 21:44:56.606697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.065 [2024-07-15 21:44:56.606734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.065 [2024-07-15 21:44:56.606745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.065 [2024-07-15 21:44:56.606984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.065 [2024-07-15 21:44:56.607214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.065 [2024-07-15 21:44:56.607223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.065 [2024-07-15 21:44:56.607230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.065 [2024-07-15 21:44:56.610783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.065 [2024-07-15 21:44:56.619781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.065 [2024-07-15 21:44:56.620511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.065 [2024-07-15 21:44:56.620548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.065 [2024-07-15 21:44:56.620559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.065 [2024-07-15 21:44:56.620802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.065 [2024-07-15 21:44:56.621025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.065 [2024-07-15 21:44:56.621033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.065 [2024-07-15 21:44:56.621040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.065 [2024-07-15 21:44:56.624598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.065 [2024-07-15 21:44:56.633600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.065 [2024-07-15 21:44:56.634331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.065 [2024-07-15 21:44:56.634367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.065 [2024-07-15 21:44:56.634377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.065 [2024-07-15 21:44:56.634616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.065 [2024-07-15 21:44:56.634839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.065 [2024-07-15 21:44:56.634847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.065 [2024-07-15 21:44:56.634855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.065 [2024-07-15 21:44:56.638416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.065 [2024-07-15 21:44:56.647410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.065 [2024-07-15 21:44:56.648142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.065 [2024-07-15 21:44:56.648178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.065 [2024-07-15 21:44:56.648191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.065 [2024-07-15 21:44:56.648431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.065 [2024-07-15 21:44:56.648653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.065 [2024-07-15 21:44:56.648661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.065 [2024-07-15 21:44:56.648669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.065 [2024-07-15 21:44:56.652224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.065 [2024-07-15 21:44:56.661219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.065 [2024-07-15 21:44:56.661948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.065 [2024-07-15 21:44:56.661984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.065 [2024-07-15 21:44:56.661995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.065 [2024-07-15 21:44:56.662242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.065 [2024-07-15 21:44:56.662465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.065 [2024-07-15 21:44:56.662473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.065 [2024-07-15 21:44:56.662485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.065 [2024-07-15 21:44:56.666036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.065 [2024-07-15 21:44:56.675033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.065 [2024-07-15 21:44:56.675633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.066 [2024-07-15 21:44:56.675670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.066 [2024-07-15 21:44:56.675680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.066 [2024-07-15 21:44:56.675919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.066 [2024-07-15 21:44:56.676151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.066 [2024-07-15 21:44:56.676160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.066 [2024-07-15 21:44:56.676168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.066 [2024-07-15 21:44:56.679718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.066 [2024-07-15 21:44:56.688920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.066 [2024-07-15 21:44:56.689600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.066 [2024-07-15 21:44:56.689636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.066 [2024-07-15 21:44:56.689647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.066 [2024-07-15 21:44:56.689886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.066 [2024-07-15 21:44:56.690108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.066 [2024-07-15 21:44:56.690116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.066 [2024-07-15 21:44:56.690133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.066 [2024-07-15 21:44:56.693684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.066 [2024-07-15 21:44:56.702891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.066 [2024-07-15 21:44:56.703606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.066 [2024-07-15 21:44:56.703642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.066 [2024-07-15 21:44:56.703653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.066 [2024-07-15 21:44:56.703892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.066 [2024-07-15 21:44:56.704115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.066 [2024-07-15 21:44:56.704133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.066 [2024-07-15 21:44:56.704141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.066 [2024-07-15 21:44:56.707693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.066 [2024-07-15 21:44:56.716687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.066 [2024-07-15 21:44:56.717463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.066 [2024-07-15 21:44:56.717500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.066 [2024-07-15 21:44:56.717511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.066 [2024-07-15 21:44:56.717749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.066 [2024-07-15 21:44:56.717972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.066 [2024-07-15 21:44:56.717980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.066 [2024-07-15 21:44:56.717988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.066 [2024-07-15 21:44:56.721545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.066 [2024-07-15 21:44:56.730553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.066 [2024-07-15 21:44:56.731247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.066 [2024-07-15 21:44:56.731283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.066 [2024-07-15 21:44:56.731294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.066 [2024-07-15 21:44:56.731532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.066 [2024-07-15 21:44:56.731755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.066 [2024-07-15 21:44:56.731764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.066 [2024-07-15 21:44:56.731771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.066 [2024-07-15 21:44:56.735330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.066 [2024-07-15 21:44:56.744534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.066 [2024-07-15 21:44:56.745223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.066 [2024-07-15 21:44:56.745259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.066 [2024-07-15 21:44:56.745271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.066 [2024-07-15 21:44:56.745513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.066 [2024-07-15 21:44:56.745736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.066 [2024-07-15 21:44:56.745745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.066 [2024-07-15 21:44:56.745752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.066 [2024-07-15 21:44:56.749313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.066 [2024-07-15 21:44:56.758515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.066 [2024-07-15 21:44:56.759223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.066 [2024-07-15 21:44:56.759260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.066 [2024-07-15 21:44:56.759272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.066 [2024-07-15 21:44:56.759514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.066 [2024-07-15 21:44:56.759741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.066 [2024-07-15 21:44:56.759749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.066 [2024-07-15 21:44:56.759757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.066 [2024-07-15 21:44:56.763317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.066 [2024-07-15 21:44:56.772314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.066 [2024-07-15 21:44:56.773087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.066 [2024-07-15 21:44:56.773131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.066 [2024-07-15 21:44:56.773142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.066 [2024-07-15 21:44:56.773381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.066 [2024-07-15 21:44:56.773604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.066 [2024-07-15 21:44:56.773612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.066 [2024-07-15 21:44:56.773619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.066 [2024-07-15 21:44:56.777182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.066 [2024-07-15 21:44:56.786296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.066 [2024-07-15 21:44:56.787012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.066 [2024-07-15 21:44:56.787048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.066 [2024-07-15 21:44:56.787059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.066 [2024-07-15 21:44:56.787306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.066 [2024-07-15 21:44:56.787530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.066 [2024-07-15 21:44:56.787538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.066 [2024-07-15 21:44:56.787545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.066 [2024-07-15 21:44:56.791093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.066 [2024-07-15 21:44:56.800300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.066 [2024-07-15 21:44:56.801049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.066 [2024-07-15 21:44:56.801086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.066 [2024-07-15 21:44:56.801098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.066 [2024-07-15 21:44:56.801346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.066 [2024-07-15 21:44:56.801569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.066 [2024-07-15 21:44:56.801578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.066 [2024-07-15 21:44:56.801586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.066 [2024-07-15 21:44:56.805143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.066 [2024-07-15 21:44:56.814140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.066 [2024-07-15 21:44:56.814913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.066 [2024-07-15 21:44:56.814950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.066 [2024-07-15 21:44:56.814962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.066 [2024-07-15 21:44:56.815208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.066 [2024-07-15 21:44:56.815432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.066 [2024-07-15 21:44:56.815440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.066 [2024-07-15 21:44:56.815448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.066 [2024-07-15 21:44:56.818997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.066 [2024-07-15 21:44:56.827994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.067 [2024-07-15 21:44:56.828641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.067 [2024-07-15 21:44:56.828659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.067 [2024-07-15 21:44:56.828667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.067 [2024-07-15 21:44:56.828886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.067 [2024-07-15 21:44:56.829105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.067 [2024-07-15 21:44:56.829113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.067 [2024-07-15 21:44:56.829120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.067 [2024-07-15 21:44:56.832678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.067 [2024-07-15 21:44:56.841926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.067 [2024-07-15 21:44:56.842602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.067 [2024-07-15 21:44:56.842618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.067 [2024-07-15 21:44:56.842626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.067 [2024-07-15 21:44:56.842845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.067 [2024-07-15 21:44:56.843064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.067 [2024-07-15 21:44:56.843072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.067 [2024-07-15 21:44:56.843079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.067 [2024-07-15 21:44:56.846625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.067 [2024-07-15 21:44:56.855831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.067 [2024-07-15 21:44:56.856456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.067 [2024-07-15 21:44:56.856477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.067 [2024-07-15 21:44:56.856485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.067 [2024-07-15 21:44:56.856704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.067 [2024-07-15 21:44:56.856922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.067 [2024-07-15 21:44:56.856930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.067 [2024-07-15 21:44:56.856936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.067 [2024-07-15 21:44:56.860487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.391 [2024-07-15 21:44:56.869700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.391 [2024-07-15 21:44:56.870316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.391 [2024-07-15 21:44:56.870331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.391 [2024-07-15 21:44:56.870338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.391 [2024-07-15 21:44:56.870557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.391 [2024-07-15 21:44:56.870776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.391 [2024-07-15 21:44:56.870784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.391 [2024-07-15 21:44:56.870791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.391 [2024-07-15 21:44:56.874337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.391 [2024-07-15 21:44:56.883547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.391 [2024-07-15 21:44:56.884228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.391 [2024-07-15 21:44:56.884264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.391 [2024-07-15 21:44:56.884276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.391 [2024-07-15 21:44:56.884518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.391 [2024-07-15 21:44:56.884741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.391 [2024-07-15 21:44:56.884750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.391 [2024-07-15 21:44:56.884757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.391 [2024-07-15 21:44:56.888315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.391 [2024-07-15 21:44:56.897518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.391 [2024-07-15 21:44:56.898223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.391 [2024-07-15 21:44:56.898260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.391 [2024-07-15 21:44:56.898271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.391 [2024-07-15 21:44:56.898509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.391 [2024-07-15 21:44:56.898736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.391 [2024-07-15 21:44:56.898745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.391 [2024-07-15 21:44:56.898752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.391 [2024-07-15 21:44:56.902312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.391 [2024-07-15 21:44:56.911517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.391 [2024-07-15 21:44:56.912222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.391 [2024-07-15 21:44:56.912259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.391 [2024-07-15 21:44:56.912270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.391 [2024-07-15 21:44:56.912508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.391 [2024-07-15 21:44:56.912731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.391 [2024-07-15 21:44:56.912739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.391 [2024-07-15 21:44:56.912746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.391 [2024-07-15 21:44:56.916307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.391 [2024-07-15 21:44:56.925514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.391 [2024-07-15 21:44:56.926224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.391 [2024-07-15 21:44:56.926261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.391 [2024-07-15 21:44:56.926273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.391 [2024-07-15 21:44:56.926515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.391 [2024-07-15 21:44:56.926738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.391 [2024-07-15 21:44:56.926747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.391 [2024-07-15 21:44:56.926754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.391 [2024-07-15 21:44:56.930317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.391 [2024-07-15 21:44:56.939319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.391 [2024-07-15 21:44:56.939963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.391 [2024-07-15 21:44:56.939981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.391 [2024-07-15 21:44:56.939988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.391 [2024-07-15 21:44:56.940213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.391 [2024-07-15 21:44:56.940433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.391 [2024-07-15 21:44:56.940441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.391 [2024-07-15 21:44:56.940447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.391 [2024-07-15 21:44:56.943990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.391 [2024-07-15 21:44:56.953196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.391 [2024-07-15 21:44:56.953859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.391 [2024-07-15 21:44:56.953873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.391 [2024-07-15 21:44:56.953881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.391 [2024-07-15 21:44:56.954099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.391 [2024-07-15 21:44:56.954324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.391 [2024-07-15 21:44:56.954333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.391 [2024-07-15 21:44:56.954339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.391 [2024-07-15 21:44:56.957885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.391 [2024-07-15 21:44:56.967087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.391 [2024-07-15 21:44:56.967771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.391 [2024-07-15 21:44:56.967808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.391 [2024-07-15 21:44:56.967819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.391 [2024-07-15 21:44:56.968058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.391 [2024-07-15 21:44:56.968291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.391 [2024-07-15 21:44:56.968300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.391 [2024-07-15 21:44:56.968307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.391 [2024-07-15 21:44:56.971858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.391 [2024-07-15 21:44:56.981070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.391 [2024-07-15 21:44:56.981753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.391 [2024-07-15 21:44:56.981771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.391 [2024-07-15 21:44:56.981779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.391 [2024-07-15 21:44:56.981998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.391 [2024-07-15 21:44:56.982223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.391 [2024-07-15 21:44:56.982232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.391 [2024-07-15 21:44:56.982239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.391 [2024-07-15 21:44:56.985783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.391 [2024-07-15 21:44:56.994981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.392 [2024-07-15 21:44:56.995700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.392 [2024-07-15 21:44:56.995737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.392 [2024-07-15 21:44:56.995752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.392 [2024-07-15 21:44:56.995991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.392 [2024-07-15 21:44:56.996224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.392 [2024-07-15 21:44:56.996233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.392 [2024-07-15 21:44:56.996241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.392 [2024-07-15 21:44:56.999792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.392 [2024-07-15 21:44:57.008793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.392 [2024-07-15 21:44:57.009526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.392 [2024-07-15 21:44:57.009563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.392 [2024-07-15 21:44:57.009573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.392 [2024-07-15 21:44:57.009812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.392 [2024-07-15 21:44:57.010034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.392 [2024-07-15 21:44:57.010042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.392 [2024-07-15 21:44:57.010049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.392 [2024-07-15 21:44:57.013606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.392 [2024-07-15 21:44:57.022603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.392 [2024-07-15 21:44:57.023400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.392 [2024-07-15 21:44:57.023437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.392 [2024-07-15 21:44:57.023448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.392 [2024-07-15 21:44:57.023686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.392 [2024-07-15 21:44:57.023909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.392 [2024-07-15 21:44:57.023917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.392 [2024-07-15 21:44:57.023924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.392 [2024-07-15 21:44:57.027482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.392 [2024-07-15 21:44:57.036500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.392 [2024-07-15 21:44:57.037222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.392 [2024-07-15 21:44:57.037259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.392 [2024-07-15 21:44:57.037271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.392 [2024-07-15 21:44:57.037511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.392 [2024-07-15 21:44:57.037734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.392 [2024-07-15 21:44:57.037747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.392 [2024-07-15 21:44:57.037754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.392 [2024-07-15 21:44:57.041313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.392 [2024-07-15 21:44:57.050314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.392 [2024-07-15 21:44:57.050995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.392 [2024-07-15 21:44:57.051032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.392 [2024-07-15 21:44:57.051043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.392 [2024-07-15 21:44:57.051290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.392 [2024-07-15 21:44:57.051514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.392 [2024-07-15 21:44:57.051522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.392 [2024-07-15 21:44:57.051529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.392 [2024-07-15 21:44:57.055077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.392 [2024-07-15 21:44:57.064284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.392 [2024-07-15 21:44:57.065057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.392 [2024-07-15 21:44:57.065093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.392 [2024-07-15 21:44:57.065105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.392 [2024-07-15 21:44:57.065354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.392 [2024-07-15 21:44:57.065578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.392 [2024-07-15 21:44:57.065586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.392 [2024-07-15 21:44:57.065594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.392 [2024-07-15 21:44:57.069146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.392 [2024-07-15 21:44:57.078149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.392 [2024-07-15 21:44:57.078913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.392 [2024-07-15 21:44:57.078949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.392 [2024-07-15 21:44:57.078959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.392 [2024-07-15 21:44:57.079207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.392 [2024-07-15 21:44:57.079430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.392 [2024-07-15 21:44:57.079438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.392 [2024-07-15 21:44:57.079445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.392 [2024-07-15 21:44:57.082993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.392 [2024-07-15 21:44:57.091987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.392 [2024-07-15 21:44:57.092759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.392 [2024-07-15 21:44:57.092795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.392 [2024-07-15 21:44:57.092806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.392 [2024-07-15 21:44:57.093044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.392 [2024-07-15 21:44:57.093277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.392 [2024-07-15 21:44:57.093286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.392 [2024-07-15 21:44:57.093293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.392 [2024-07-15 21:44:57.096846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.392 [2024-07-15 21:44:57.105837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.392 [2024-07-15 21:44:57.106569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.392 [2024-07-15 21:44:57.106606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.392 [2024-07-15 21:44:57.106616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.392 [2024-07-15 21:44:57.106855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.392 [2024-07-15 21:44:57.107077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.392 [2024-07-15 21:44:57.107086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.392 [2024-07-15 21:44:57.107093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.392 [2024-07-15 21:44:57.110653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.392 [2024-07-15 21:44:57.119649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.392 [2024-07-15 21:44:57.120407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.392 [2024-07-15 21:44:57.120444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.392 [2024-07-15 21:44:57.120455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.392 [2024-07-15 21:44:57.120693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.392 [2024-07-15 21:44:57.120915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.392 [2024-07-15 21:44:57.120924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.392 [2024-07-15 21:44:57.120931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.392 [2024-07-15 21:44:57.124491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.392 [2024-07-15 21:44:57.133497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.392 [2024-07-15 21:44:57.134239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.392 [2024-07-15 21:44:57.134275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.392 [2024-07-15 21:44:57.134287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.392 [2024-07-15 21:44:57.134534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.392 [2024-07-15 21:44:57.134756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.392 [2024-07-15 21:44:57.134765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.392 [2024-07-15 21:44:57.134772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.393 [2024-07-15 21:44:57.138331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.393 [2024-07-15 21:44:57.147331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.393 [2024-07-15 21:44:57.147943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.393 [2024-07-15 21:44:57.147980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.393 [2024-07-15 21:44:57.147991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.393 [2024-07-15 21:44:57.148237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.393 [2024-07-15 21:44:57.148461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.393 [2024-07-15 21:44:57.148469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.393 [2024-07-15 21:44:57.148477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.655 [2024-07-15 21:44:57.152027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.655 [2024-07-15 21:44:57.161241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.655 [2024-07-15 21:44:57.161951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.655 [2024-07-15 21:44:57.161988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.655 [2024-07-15 21:44:57.161999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.655 [2024-07-15 21:44:57.162246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.655 [2024-07-15 21:44:57.162470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.655 [2024-07-15 21:44:57.162479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.655 [2024-07-15 21:44:57.162486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.655 [2024-07-15 21:44:57.166038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.655 [2024-07-15 21:44:57.175043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.655 [2024-07-15 21:44:57.175670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.655 [2024-07-15 21:44:57.175708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.655 [2024-07-15 21:44:57.175718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.655 [2024-07-15 21:44:57.175957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.655 [2024-07-15 21:44:57.176186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.655 [2024-07-15 21:44:57.176195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.656 [2024-07-15 21:44:57.176210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.656 [2024-07-15 21:44:57.179764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.656 [2024-07-15 21:44:57.188971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.656 [2024-07-15 21:44:57.189668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-07-15 21:44:57.189704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.656 [2024-07-15 21:44:57.189715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.656 [2024-07-15 21:44:57.189953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.656 [2024-07-15 21:44:57.190184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.656 [2024-07-15 21:44:57.190193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.656 [2024-07-15 21:44:57.190201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.656 [2024-07-15 21:44:57.193751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.656 [2024-07-15 21:44:57.202954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.656 [2024-07-15 21:44:57.203635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-07-15 21:44:57.203653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.656 [2024-07-15 21:44:57.203661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.656 [2024-07-15 21:44:57.203880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.656 [2024-07-15 21:44:57.204098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.656 [2024-07-15 21:44:57.204106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.656 [2024-07-15 21:44:57.204113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.656 [2024-07-15 21:44:57.207664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.656 [2024-07-15 21:44:57.216870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.656 [2024-07-15 21:44:57.217580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-07-15 21:44:57.217617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.656 [2024-07-15 21:44:57.217628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.656 [2024-07-15 21:44:57.217866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.656 [2024-07-15 21:44:57.218089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.656 [2024-07-15 21:44:57.218097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.656 [2024-07-15 21:44:57.218104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.656 [2024-07-15 21:44:57.221663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.656 [2024-07-15 21:44:57.230663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.656 [2024-07-15 21:44:57.231431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-07-15 21:44:57.231473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.656 [2024-07-15 21:44:57.231484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.656 [2024-07-15 21:44:57.231722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.656 [2024-07-15 21:44:57.231945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.656 [2024-07-15 21:44:57.231953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.656 [2024-07-15 21:44:57.231960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.656 [2024-07-15 21:44:57.235518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.656 [2024-07-15 21:44:57.244516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.656 [2024-07-15 21:44:57.245335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-07-15 21:44:57.245371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.656 [2024-07-15 21:44:57.245382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.656 [2024-07-15 21:44:57.245621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.656 [2024-07-15 21:44:57.245843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.656 [2024-07-15 21:44:57.245851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.656 [2024-07-15 21:44:57.245859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.656 [2024-07-15 21:44:57.249419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.656 [2024-07-15 21:44:57.258414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.656 [2024-07-15 21:44:57.259178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-07-15 21:44:57.259215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.656 [2024-07-15 21:44:57.259226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.656 [2024-07-15 21:44:57.259466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.656 [2024-07-15 21:44:57.259689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.656 [2024-07-15 21:44:57.259697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.656 [2024-07-15 21:44:57.259705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.656 [2024-07-15 21:44:57.263266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.656 [2024-07-15 21:44:57.272266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.656 [2024-07-15 21:44:57.272887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-07-15 21:44:57.272924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.656 [2024-07-15 21:44:57.272934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.656 [2024-07-15 21:44:57.273181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.656 [2024-07-15 21:44:57.273409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.656 [2024-07-15 21:44:57.273418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.656 [2024-07-15 21:44:57.273425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.656 [2024-07-15 21:44:57.276985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.656 [2024-07-15 21:44:57.286193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.656 [2024-07-15 21:44:57.286967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-07-15 21:44:57.287003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.656 [2024-07-15 21:44:57.287014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.656 [2024-07-15 21:44:57.287265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.656 [2024-07-15 21:44:57.287490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.656 [2024-07-15 21:44:57.287498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.656 [2024-07-15 21:44:57.287505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.656 [2024-07-15 21:44:57.291056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.656 [2024-07-15 21:44:57.300049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.656 [2024-07-15 21:44:57.300821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-07-15 21:44:57.300858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.656 [2024-07-15 21:44:57.300869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.656 [2024-07-15 21:44:57.301107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.656 [2024-07-15 21:44:57.301338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.656 [2024-07-15 21:44:57.301347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.656 [2024-07-15 21:44:57.301355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.656 [2024-07-15 21:44:57.304905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.656 [2024-07-15 21:44:57.313906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.656 [2024-07-15 21:44:57.314649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-07-15 21:44:57.314686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.656 [2024-07-15 21:44:57.314697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.656 [2024-07-15 21:44:57.314935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.657 [2024-07-15 21:44:57.315167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.657 [2024-07-15 21:44:57.315176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.657 [2024-07-15 21:44:57.315184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.657 [2024-07-15 21:44:57.318740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.657 [2024-07-15 21:44:57.327731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.657 [2024-07-15 21:44:57.328453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-07-15 21:44:57.328490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.657 [2024-07-15 21:44:57.328501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.657 [2024-07-15 21:44:57.328740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.657 [2024-07-15 21:44:57.328962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.657 [2024-07-15 21:44:57.328970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.657 [2024-07-15 21:44:57.328978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.657 [2024-07-15 21:44:57.332548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.657 [2024-07-15 21:44:57.341555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.657 [2024-07-15 21:44:57.342250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-07-15 21:44:57.342287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.657 [2024-07-15 21:44:57.342299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.657 [2024-07-15 21:44:57.342540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.657 [2024-07-15 21:44:57.342764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.657 [2024-07-15 21:44:57.342772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.657 [2024-07-15 21:44:57.342779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.657 [2024-07-15 21:44:57.346341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.657 [2024-07-15 21:44:57.355547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.657 [2024-07-15 21:44:57.356230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-07-15 21:44:57.356267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.657 [2024-07-15 21:44:57.356279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.657 [2024-07-15 21:44:57.356522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.657 [2024-07-15 21:44:57.356744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.657 [2024-07-15 21:44:57.356753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.657 [2024-07-15 21:44:57.356760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.657 [2024-07-15 21:44:57.360315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.657 [2024-07-15 21:44:57.369519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.657 [2024-07-15 21:44:57.370173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-07-15 21:44:57.370198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.657 [2024-07-15 21:44:57.370211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.657 [2024-07-15 21:44:57.370435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.657 [2024-07-15 21:44:57.370655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.657 [2024-07-15 21:44:57.370663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.657 [2024-07-15 21:44:57.370670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.657 [2024-07-15 21:44:57.374222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.657 [2024-07-15 21:44:57.383436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.657 [2024-07-15 21:44:57.384117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-07-15 21:44:57.384160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.657 [2024-07-15 21:44:57.384171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.657 [2024-07-15 21:44:57.384410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.657 [2024-07-15 21:44:57.384632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.657 [2024-07-15 21:44:57.384640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.657 [2024-07-15 21:44:57.384648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.657 [2024-07-15 21:44:57.388202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.657 [2024-07-15 21:44:57.397411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.657 [2024-07-15 21:44:57.398078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-07-15 21:44:57.398114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.657 [2024-07-15 21:44:57.398133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.657 [2024-07-15 21:44:57.398377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.657 [2024-07-15 21:44:57.398599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.657 [2024-07-15 21:44:57.398608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.657 [2024-07-15 21:44:57.398615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.657 [2024-07-15 21:44:57.402167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.657 [2024-07-15 21:44:57.411379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.657 [2024-07-15 21:44:57.411951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-07-15 21:44:57.411969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.657 [2024-07-15 21:44:57.411976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.657 [2024-07-15 21:44:57.412203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.657 [2024-07-15 21:44:57.412423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.657 [2024-07-15 21:44:57.412436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.657 [2024-07-15 21:44:57.412443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.657 [2024-07-15 21:44:57.415996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.657 [2024-07-15 21:44:57.425217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.657 [2024-07-15 21:44:57.425843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-07-15 21:44:57.425859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.657 [2024-07-15 21:44:57.425866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.657 [2024-07-15 21:44:57.426085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.657 [2024-07-15 21:44:57.426309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.657 [2024-07-15 21:44:57.426318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.657 [2024-07-15 21:44:57.426324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.657 [2024-07-15 21:44:57.429877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.657 [2024-07-15 21:44:57.439121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.657 [2024-07-15 21:44:57.439880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-07-15 21:44:57.439916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.657 [2024-07-15 21:44:57.439928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.657 [2024-07-15 21:44:57.440180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.657 [2024-07-15 21:44:57.440404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.657 [2024-07-15 21:44:57.440413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.657 [2024-07-15 21:44:57.440421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.657 [2024-07-15 21:44:57.443985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.657 [2024-07-15 21:44:57.452992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.657 [2024-07-15 21:44:57.453704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-07-15 21:44:57.453741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.657 [2024-07-15 21:44:57.453752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.657 [2024-07-15 21:44:57.453991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.657 [2024-07-15 21:44:57.454224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.658 [2024-07-15 21:44:57.454234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.658 [2024-07-15 21:44:57.454241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.658 [2024-07-15 21:44:57.457798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.920 [2024-07-15 21:44:57.466814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.920 [2024-07-15 21:44:57.467506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.920 [2024-07-15 21:44:57.467543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.920 [2024-07-15 21:44:57.467554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.920 [2024-07-15 21:44:57.467793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.920 [2024-07-15 21:44:57.468016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.920 [2024-07-15 21:44:57.468025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.920 [2024-07-15 21:44:57.468032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.920 [2024-07-15 21:44:57.471591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.920 [2024-07-15 21:44:57.480898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.920 [2024-07-15 21:44:57.481634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.920 [2024-07-15 21:44:57.481672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.920 [2024-07-15 21:44:57.481682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.920 [2024-07-15 21:44:57.481921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.920 [2024-07-15 21:44:57.482151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.920 [2024-07-15 21:44:57.482160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.920 [2024-07-15 21:44:57.482168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.920 [2024-07-15 21:44:57.485726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.920 [2024-07-15 21:44:57.494736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.920 [2024-07-15 21:44:57.495479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.920 [2024-07-15 21:44:57.495516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.920 [2024-07-15 21:44:57.495527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.920 [2024-07-15 21:44:57.495766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.920 [2024-07-15 21:44:57.495989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.920 [2024-07-15 21:44:57.495997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.920 [2024-07-15 21:44:57.496005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.920 [2024-07-15 21:44:57.499571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.920 [2024-07-15 21:44:57.508587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.920 [2024-07-15 21:44:57.509274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.920 [2024-07-15 21:44:57.509311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.920 [2024-07-15 21:44:57.509326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.920 [2024-07-15 21:44:57.509565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.920 [2024-07-15 21:44:57.509787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.920 [2024-07-15 21:44:57.509796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.920 [2024-07-15 21:44:57.509803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.920 [2024-07-15 21:44:57.513365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.920 [2024-07-15 21:44:57.522574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.920 [2024-07-15 21:44:57.523413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.920 [2024-07-15 21:44:57.523450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.920 [2024-07-15 21:44:57.523461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.920 [2024-07-15 21:44:57.523700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.920 [2024-07-15 21:44:57.523922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.920 [2024-07-15 21:44:57.523930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.920 [2024-07-15 21:44:57.523937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.920 [2024-07-15 21:44:57.527492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.920 [2024-07-15 21:44:57.536497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.920 [2024-07-15 21:44:57.537065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.920 [2024-07-15 21:44:57.537083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.920 [2024-07-15 21:44:57.537090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.920 [2024-07-15 21:44:57.537316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.920 [2024-07-15 21:44:57.537536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.920 [2024-07-15 21:44:57.537543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.920 [2024-07-15 21:44:57.537550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.920 [2024-07-15 21:44:57.541092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.920 [2024-07-15 21:44:57.550306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.920 [2024-07-15 21:44:57.551020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.920 [2024-07-15 21:44:57.551057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.920 [2024-07-15 21:44:57.551067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.920 [2024-07-15 21:44:57.551315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.920 [2024-07-15 21:44:57.551539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.920 [2024-07-15 21:44:57.551552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.920 [2024-07-15 21:44:57.551560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.920 [2024-07-15 21:44:57.555114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.920 [2024-07-15 21:44:57.564133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.920 [2024-07-15 21:44:57.564852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.920 [2024-07-15 21:44:57.564889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.920 [2024-07-15 21:44:57.564900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.920 [2024-07-15 21:44:57.565147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.920 [2024-07-15 21:44:57.565370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.920 [2024-07-15 21:44:57.565378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.920 [2024-07-15 21:44:57.565386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.920 [2024-07-15 21:44:57.568944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.920 [2024-07-15 21:44:57.577963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.920 [2024-07-15 21:44:57.578624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.920 [2024-07-15 21:44:57.578661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.920 [2024-07-15 21:44:57.578673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.921 [2024-07-15 21:44:57.578915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.921 [2024-07-15 21:44:57.579150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.921 [2024-07-15 21:44:57.579161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.921 [2024-07-15 21:44:57.579168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.921 [2024-07-15 21:44:57.582723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.921 [2024-07-15 21:44:57.591942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.921 [2024-07-15 21:44:57.592672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.921 [2024-07-15 21:44:57.592709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.921 [2024-07-15 21:44:57.592720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.921 [2024-07-15 21:44:57.592958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.921 [2024-07-15 21:44:57.593190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.921 [2024-07-15 21:44:57.593199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.921 [2024-07-15 21:44:57.593207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.921 [2024-07-15 21:44:57.596760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.921 [2024-07-15 21:44:57.605771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.921 [2024-07-15 21:44:57.606514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.921 [2024-07-15 21:44:57.606552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.921 [2024-07-15 21:44:57.606563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.921 [2024-07-15 21:44:57.606801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.921 [2024-07-15 21:44:57.607024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.921 [2024-07-15 21:44:57.607032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.921 [2024-07-15 21:44:57.607039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.921 [2024-07-15 21:44:57.610604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.921 [2024-07-15 21:44:57.619619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.921 [2024-07-15 21:44:57.620382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.921 [2024-07-15 21:44:57.620418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.921 [2024-07-15 21:44:57.620429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.921 [2024-07-15 21:44:57.620667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.921 [2024-07-15 21:44:57.620890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.921 [2024-07-15 21:44:57.620899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.921 [2024-07-15 21:44:57.620906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.921 [2024-07-15 21:44:57.624463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.921 [2024-07-15 21:44:57.633482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.921 [2024-07-15 21:44:57.634238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.921 [2024-07-15 21:44:57.634275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.921 [2024-07-15 21:44:57.634286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.921 [2024-07-15 21:44:57.634525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.921 [2024-07-15 21:44:57.634747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.921 [2024-07-15 21:44:57.634755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.921 [2024-07-15 21:44:57.634763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.921 [2024-07-15 21:44:57.638321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.921 [2024-07-15 21:44:57.647329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.921 [2024-07-15 21:44:57.648070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.921 [2024-07-15 21:44:57.648107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.921 [2024-07-15 21:44:57.648118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.921 [2024-07-15 21:44:57.648371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.921 [2024-07-15 21:44:57.648594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.921 [2024-07-15 21:44:57.648602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.921 [2024-07-15 21:44:57.648610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.921 [2024-07-15 21:44:57.652172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.921 [2024-07-15 21:44:57.661187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.921 [2024-07-15 21:44:57.661956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.921 [2024-07-15 21:44:57.661993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.921 [2024-07-15 21:44:57.662003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.921 [2024-07-15 21:44:57.662250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.921 [2024-07-15 21:44:57.662474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.921 [2024-07-15 21:44:57.662482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.921 [2024-07-15 21:44:57.662490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.921 [2024-07-15 21:44:57.666042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.921 [2024-07-15 21:44:57.675049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.921 [2024-07-15 21:44:57.675733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.921 [2024-07-15 21:44:57.675752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.921 [2024-07-15 21:44:57.675759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.921 [2024-07-15 21:44:57.675979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.921 [2024-07-15 21:44:57.676212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.921 [2024-07-15 21:44:57.676220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.921 [2024-07-15 21:44:57.676227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.921 [2024-07-15 21:44:57.679780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.921 [2024-07-15 21:44:57.688997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.921 [2024-07-15 21:44:57.689673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.921 [2024-07-15 21:44:57.689689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.921 [2024-07-15 21:44:57.689696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.921 [2024-07-15 21:44:57.689914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.921 [2024-07-15 21:44:57.690140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.921 [2024-07-15 21:44:57.690148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.921 [2024-07-15 21:44:57.690159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.921 [2024-07-15 21:44:57.693708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.921 [2024-07-15 21:44:57.702917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.921 [2024-07-15 21:44:57.703635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.921 [2024-07-15 21:44:57.703671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.921 [2024-07-15 21:44:57.703682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.921 [2024-07-15 21:44:57.703921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.921 [2024-07-15 21:44:57.704154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.921 [2024-07-15 21:44:57.704163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.921 [2024-07-15 21:44:57.704171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.921 [2024-07-15 21:44:57.707727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.921 [2024-07-15 21:44:57.716745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.921 [2024-07-15 21:44:57.717446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.921 [2024-07-15 21:44:57.717483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:07.921 [2024-07-15 21:44:57.717494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:07.921 [2024-07-15 21:44:57.717732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:07.921 [2024-07-15 21:44:57.717954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.921 [2024-07-15 21:44:57.717962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.921 [2024-07-15 21:44:57.717970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.922 [2024-07-15 21:44:57.721525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.183 [2024-07-15 21:44:57.730738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.183 [2024-07-15 21:44:57.731490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-07-15 21:44:57.731527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.183 [2024-07-15 21:44:57.731538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.183 [2024-07-15 21:44:57.731776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.183 [2024-07-15 21:44:57.731999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.183 [2024-07-15 21:44:57.732007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.183 [2024-07-15 21:44:57.732015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.183 [2024-07-15 21:44:57.735570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.183 [2024-07-15 21:44:57.744577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.183 [2024-07-15 21:44:57.745322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-07-15 21:44:57.745363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.183 [2024-07-15 21:44:57.745376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.183 [2024-07-15 21:44:57.745615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.183 [2024-07-15 21:44:57.745837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.183 [2024-07-15 21:44:57.745846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.183 [2024-07-15 21:44:57.745854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.184 [2024-07-15 21:44:57.749417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.184 [2024-07-15 21:44:57.758433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.184 [2024-07-15 21:44:57.759117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-07-15 21:44:57.759142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.184 [2024-07-15 21:44:57.759150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.184 [2024-07-15 21:44:57.759370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.184 [2024-07-15 21:44:57.759589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.184 [2024-07-15 21:44:57.759598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.184 [2024-07-15 21:44:57.759605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.184 [2024-07-15 21:44:57.763161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.184 [2024-07-15 21:44:57.772379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.184 [2024-07-15 21:44:57.773041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-07-15 21:44:57.773056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.184 [2024-07-15 21:44:57.773063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.184 [2024-07-15 21:44:57.773287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.184 [2024-07-15 21:44:57.773506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.184 [2024-07-15 21:44:57.773514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.184 [2024-07-15 21:44:57.773521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.184 [2024-07-15 21:44:57.777075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.184 [2024-07-15 21:44:57.786292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.184 [2024-07-15 21:44:57.786960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-07-15 21:44:57.786975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.184 [2024-07-15 21:44:57.786982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.184 [2024-07-15 21:44:57.787207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.184 [2024-07-15 21:44:57.787431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.184 [2024-07-15 21:44:57.787438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.184 [2024-07-15 21:44:57.787445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.184 [2024-07-15 21:44:57.790992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.184 [2024-07-15 21:44:57.800224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.184 [2024-07-15 21:44:57.800927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-07-15 21:44:57.800964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.184 [2024-07-15 21:44:57.800974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.184 [2024-07-15 21:44:57.801221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.184 [2024-07-15 21:44:57.801445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.184 [2024-07-15 21:44:57.801453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.184 [2024-07-15 21:44:57.801461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.184 [2024-07-15 21:44:57.805014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.184 [2024-07-15 21:44:57.814032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.184 [2024-07-15 21:44:57.814795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-07-15 21:44:57.814832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.184 [2024-07-15 21:44:57.814842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.184 [2024-07-15 21:44:57.815081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.184 [2024-07-15 21:44:57.815314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.184 [2024-07-15 21:44:57.815324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.184 [2024-07-15 21:44:57.815331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.184 [2024-07-15 21:44:57.818977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.184 [2024-07-15 21:44:57.827996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.184 [2024-07-15 21:44:57.828774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-07-15 21:44:57.828811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.184 [2024-07-15 21:44:57.828823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.184 [2024-07-15 21:44:57.829066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.184 [2024-07-15 21:44:57.829297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.184 [2024-07-15 21:44:57.829306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.184 [2024-07-15 21:44:57.829313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.184 [2024-07-15 21:44:57.832881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.184 [2024-07-15 21:44:57.841896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.184 [2024-07-15 21:44:57.842669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-07-15 21:44:57.842706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.184 [2024-07-15 21:44:57.842717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.184 [2024-07-15 21:44:57.842956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.184 [2024-07-15 21:44:57.843190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.184 [2024-07-15 21:44:57.843199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.184 [2024-07-15 21:44:57.843207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.184 [2024-07-15 21:44:57.846760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.184 [2024-07-15 21:44:57.855772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.184 [2024-07-15 21:44:57.856521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-07-15 21:44:57.856558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.184 [2024-07-15 21:44:57.856569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.184 [2024-07-15 21:44:57.856807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.184 [2024-07-15 21:44:57.857030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.184 [2024-07-15 21:44:57.857038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.184 [2024-07-15 21:44:57.857046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.184 [2024-07-15 21:44:57.860604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.184 [2024-07-15 21:44:57.869640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.184 [2024-07-15 21:44:57.870308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-07-15 21:44:57.870327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.184 [2024-07-15 21:44:57.870335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.184 [2024-07-15 21:44:57.870554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.184 [2024-07-15 21:44:57.870774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.184 [2024-07-15 21:44:57.870782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.184 [2024-07-15 21:44:57.870789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.184 [2024-07-15 21:44:57.874337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.184 [2024-07-15 21:44:57.883561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.184 [2024-07-15 21:44:57.884243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-07-15 21:44:57.884280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.184 [2024-07-15 21:44:57.884299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.184 [2024-07-15 21:44:57.884539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.184 [2024-07-15 21:44:57.884762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.184 [2024-07-15 21:44:57.884771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.184 [2024-07-15 21:44:57.884779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.184 [2024-07-15 21:44:57.888338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.184 [2024-07-15 21:44:57.897547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.184 [2024-07-15 21:44:57.898328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-07-15 21:44:57.898364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.184 [2024-07-15 21:44:57.898376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.184 [2024-07-15 21:44:57.898614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.184 [2024-07-15 21:44:57.898837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.185 [2024-07-15 21:44:57.898845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.185 [2024-07-15 21:44:57.898853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.185 [2024-07-15 21:44:57.902416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.185 [2024-07-15 21:44:57.911417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.185 [2024-07-15 21:44:57.912053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-07-15 21:44:57.912072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.185 [2024-07-15 21:44:57.912080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.185 [2024-07-15 21:44:57.912304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.185 [2024-07-15 21:44:57.912524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.185 [2024-07-15 21:44:57.912532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.185 [2024-07-15 21:44:57.912539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.185 [2024-07-15 21:44:57.916081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.185 [2024-07-15 21:44:57.925288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.185 [2024-07-15 21:44:57.925840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-07-15 21:44:57.925855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.185 [2024-07-15 21:44:57.925862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.185 [2024-07-15 21:44:57.926080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.185 [2024-07-15 21:44:57.926304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.185 [2024-07-15 21:44:57.926316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.185 [2024-07-15 21:44:57.926323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.185 [2024-07-15 21:44:57.929903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.185 [2024-07-15 21:44:57.939114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.185 [2024-07-15 21:44:57.939785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-07-15 21:44:57.939801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.185 [2024-07-15 21:44:57.939808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.185 [2024-07-15 21:44:57.940027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.185 [2024-07-15 21:44:57.940250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.185 [2024-07-15 21:44:57.940258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.185 [2024-07-15 21:44:57.940264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.185 [2024-07-15 21:44:57.943807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.185 [2024-07-15 21:44:57.953008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.185 [2024-07-15 21:44:57.953646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-07-15 21:44:57.953683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.185 [2024-07-15 21:44:57.953694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.185 [2024-07-15 21:44:57.953933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.185 [2024-07-15 21:44:57.954164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.185 [2024-07-15 21:44:57.954173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.185 [2024-07-15 21:44:57.954180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.185 [2024-07-15 21:44:57.957729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.185 [2024-07-15 21:44:57.966937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.185 [2024-07-15 21:44:57.967553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-07-15 21:44:57.967572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.185 [2024-07-15 21:44:57.967580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.185 [2024-07-15 21:44:57.967799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.185 [2024-07-15 21:44:57.968019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.185 [2024-07-15 21:44:57.968027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.185 [2024-07-15 21:44:57.968034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.185 [2024-07-15 21:44:57.971583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.185 [2024-07-15 21:44:57.980809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.185 [2024-07-15 21:44:57.981505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-07-15 21:44:57.981543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.185 [2024-07-15 21:44:57.981553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.185 [2024-07-15 21:44:57.981792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.185 [2024-07-15 21:44:57.982014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.185 [2024-07-15 21:44:57.982023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.185 [2024-07-15 21:44:57.982031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.185 [2024-07-15 21:44:57.985589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.447 [2024-07-15 21:44:57.994796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.447 [2024-07-15 21:44:57.995425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.447 [2024-07-15 21:44:57.995444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.447 [2024-07-15 21:44:57.995451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.447 [2024-07-15 21:44:57.995671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.447 [2024-07-15 21:44:57.995889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.447 [2024-07-15 21:44:57.995897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.447 [2024-07-15 21:44:57.995904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.447 [2024-07-15 21:44:57.999450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.447 [2024-07-15 21:44:58.008654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.447 [2024-07-15 21:44:58.009313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.447 [2024-07-15 21:44:58.009350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.447 [2024-07-15 21:44:58.009361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.447 [2024-07-15 21:44:58.009600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.447 [2024-07-15 21:44:58.009822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.447 [2024-07-15 21:44:58.009831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.447 [2024-07-15 21:44:58.009839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.447 [2024-07-15 21:44:58.013396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.447 [2024-07-15 21:44:58.022610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.447 [2024-07-15 21:44:58.023387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.447 [2024-07-15 21:44:58.023424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.447 [2024-07-15 21:44:58.023434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.447 [2024-07-15 21:44:58.023678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.447 [2024-07-15 21:44:58.023901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.447 [2024-07-15 21:44:58.023909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.447 [2024-07-15 21:44:58.023916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.447 [2024-07-15 21:44:58.027471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.447 [2024-07-15 21:44:58.036470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.447 [2024-07-15 21:44:58.037226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.447 [2024-07-15 21:44:58.037263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.448 [2024-07-15 21:44:58.037275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.448 [2024-07-15 21:44:58.037515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.448 [2024-07-15 21:44:58.037738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.448 [2024-07-15 21:44:58.037747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.448 [2024-07-15 21:44:58.037755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.448 [2024-07-15 21:44:58.041317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.448 [2024-07-15 21:44:58.050318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.448 [2024-07-15 21:44:58.051096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.448 [2024-07-15 21:44:58.051139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.448 [2024-07-15 21:44:58.051151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.448 [2024-07-15 21:44:58.051389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.448 [2024-07-15 21:44:58.051612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.448 [2024-07-15 21:44:58.051620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.448 [2024-07-15 21:44:58.051627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.448 [2024-07-15 21:44:58.055183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.448 [2024-07-15 21:44:58.064222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.448 [2024-07-15 21:44:58.064859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.448 [2024-07-15 21:44:58.064876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.448 [2024-07-15 21:44:58.064884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.448 [2024-07-15 21:44:58.065104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.448 [2024-07-15 21:44:58.065329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.448 [2024-07-15 21:44:58.065337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.448 [2024-07-15 21:44:58.065348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.448 [2024-07-15 21:44:58.068895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.448 [2024-07-15 21:44:58.078114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.448 [2024-07-15 21:44:58.078836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.448 [2024-07-15 21:44:58.078873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.448 [2024-07-15 21:44:58.078883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.448 [2024-07-15 21:44:58.079131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.448 [2024-07-15 21:44:58.079354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.448 [2024-07-15 21:44:58.079363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.448 [2024-07-15 21:44:58.079370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.448 [2024-07-15 21:44:58.082921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.448 [2024-07-15 21:44:58.091922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.448 [2024-07-15 21:44:58.092595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.448 [2024-07-15 21:44:58.092614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.448 [2024-07-15 21:44:58.092621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.448 [2024-07-15 21:44:58.092841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.448 [2024-07-15 21:44:58.093060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.448 [2024-07-15 21:44:58.093067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.448 [2024-07-15 21:44:58.093074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.448 [2024-07-15 21:44:58.096624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.448 [2024-07-15 21:44:58.105821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.448 [2024-07-15 21:44:58.106573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.448 [2024-07-15 21:44:58.106610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.448 [2024-07-15 21:44:58.106620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.448 [2024-07-15 21:44:58.106859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.448 [2024-07-15 21:44:58.107082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.448 [2024-07-15 21:44:58.107090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.448 [2024-07-15 21:44:58.107098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.448 [2024-07-15 21:44:58.110652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.448 [2024-07-15 21:44:58.119651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.448 [2024-07-15 21:44:58.120301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.448 [2024-07-15 21:44:58.120338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.448 [2024-07-15 21:44:58.120348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.448 [2024-07-15 21:44:58.120587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.448 [2024-07-15 21:44:58.120809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.448 [2024-07-15 21:44:58.120817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.448 [2024-07-15 21:44:58.120824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.448 [2024-07-15 21:44:58.124382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.448 [2024-07-15 21:44:58.133592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.448 [2024-07-15 21:44:58.134304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.448 [2024-07-15 21:44:58.134340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.448 [2024-07-15 21:44:58.134351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.448 [2024-07-15 21:44:58.134590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.448 [2024-07-15 21:44:58.134812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.448 [2024-07-15 21:44:58.134820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.448 [2024-07-15 21:44:58.134828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.448 [2024-07-15 21:44:58.138388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.448 [2024-07-15 21:44:58.147587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.448 [2024-07-15 21:44:58.148366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.448 [2024-07-15 21:44:58.148403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.448 [2024-07-15 21:44:58.148413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.448 [2024-07-15 21:44:58.148652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.448 [2024-07-15 21:44:58.148875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.448 [2024-07-15 21:44:58.148884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.448 [2024-07-15 21:44:58.148891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.448 [2024-07-15 21:44:58.152450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.448 [2024-07-15 21:44:58.161443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.448 [2024-07-15 21:44:58.162199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.448 [2024-07-15 21:44:58.162236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.448 [2024-07-15 21:44:58.162248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.448 [2024-07-15 21:44:58.162494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.448 [2024-07-15 21:44:58.162717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.448 [2024-07-15 21:44:58.162725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.448 [2024-07-15 21:44:58.162733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.448 [2024-07-15 21:44:58.166292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.448 [2024-07-15 21:44:58.175282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.448 [2024-07-15 21:44:58.176058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.448 [2024-07-15 21:44:58.176094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.448 [2024-07-15 21:44:58.176105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.448 [2024-07-15 21:44:58.176352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.448 [2024-07-15 21:44:58.176575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.448 [2024-07-15 21:44:58.176583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.448 [2024-07-15 21:44:58.176591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.448 [2024-07-15 21:44:58.180158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.448 [2024-07-15 21:44:58.189213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.449 [2024-07-15 21:44:58.189988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.449 [2024-07-15 21:44:58.190025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.449 [2024-07-15 21:44:58.190035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.449 [2024-07-15 21:44:58.190282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.449 [2024-07-15 21:44:58.190506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.449 [2024-07-15 21:44:58.190514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.449 [2024-07-15 21:44:58.190522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.449 [2024-07-15 21:44:58.194074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.449 [2024-07-15 21:44:58.203084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.449 [2024-07-15 21:44:58.203854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.449 [2024-07-15 21:44:58.203891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.449 [2024-07-15 21:44:58.203902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.449 [2024-07-15 21:44:58.204149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.449 [2024-07-15 21:44:58.204372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.449 [2024-07-15 21:44:58.204381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.449 [2024-07-15 21:44:58.204392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.449 [2024-07-15 21:44:58.207944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.449 [2024-07-15 21:44:58.216951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.449 [2024-07-15 21:44:58.217683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.449 [2024-07-15 21:44:58.217720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.449 [2024-07-15 21:44:58.217731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.449 [2024-07-15 21:44:58.217970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.449 [2024-07-15 21:44:58.218200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.449 [2024-07-15 21:44:58.218209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.449 [2024-07-15 21:44:58.218216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.449 [2024-07-15 21:44:58.221769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.449 [2024-07-15 21:44:58.230775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.449 [2024-07-15 21:44:58.231421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.449 [2024-07-15 21:44:58.231438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.449 [2024-07-15 21:44:58.231446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.449 [2024-07-15 21:44:58.231665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.449 [2024-07-15 21:44:58.231884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.449 [2024-07-15 21:44:58.231892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.449 [2024-07-15 21:44:58.231898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.449 [2024-07-15 21:44:58.235449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.449 [2024-07-15 21:44:58.244650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.449 [2024-07-15 21:44:58.245395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.449 [2024-07-15 21:44:58.245432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.449 [2024-07-15 21:44:58.245443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.449 [2024-07-15 21:44:58.245681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.449 [2024-07-15 21:44:58.245903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.449 [2024-07-15 21:44:58.245911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.449 [2024-07-15 21:44:58.245919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.449 [2024-07-15 21:44:58.249478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.711 [2024-07-15 21:44:58.258479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.711 [2024-07-15 21:44:58.259239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.711 [2024-07-15 21:44:58.259280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.711 [2024-07-15 21:44:58.259291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.711 [2024-07-15 21:44:58.259530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.711 [2024-07-15 21:44:58.259752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.711 [2024-07-15 21:44:58.259760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.711 [2024-07-15 21:44:58.259767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.711 [2024-07-15 21:44:58.263334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.711 [2024-07-15 21:44:58.272342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.711 [2024-07-15 21:44:58.273110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.711 [2024-07-15 21:44:58.273154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.711 [2024-07-15 21:44:58.273165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.711 [2024-07-15 21:44:58.273404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.711 [2024-07-15 21:44:58.273627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.711 [2024-07-15 21:44:58.273635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.711 [2024-07-15 21:44:58.273642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.711 [2024-07-15 21:44:58.277204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.711 [2024-07-15 21:44:58.286207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.711 [2024-07-15 21:44:58.286930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.711 [2024-07-15 21:44:58.286966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.711 [2024-07-15 21:44:58.286977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.711 [2024-07-15 21:44:58.287225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.711 [2024-07-15 21:44:58.287449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.711 [2024-07-15 21:44:58.287457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.711 [2024-07-15 21:44:58.287464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.711 [2024-07-15 21:44:58.291017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.711 [2024-07-15 21:44:58.300018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.711 [2024-07-15 21:44:58.300662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.711 [2024-07-15 21:44:58.300699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.711 [2024-07-15 21:44:58.300710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.711 [2024-07-15 21:44:58.300948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.711 [2024-07-15 21:44:58.301184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.711 [2024-07-15 21:44:58.301193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.711 [2024-07-15 21:44:58.301201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.711 [2024-07-15 21:44:58.304752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.711 [2024-07-15 21:44:58.313963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.711 [2024-07-15 21:44:58.314686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.711 [2024-07-15 21:44:58.314723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.711 [2024-07-15 21:44:58.314733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.711 [2024-07-15 21:44:58.314972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.711 [2024-07-15 21:44:58.315204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.711 [2024-07-15 21:44:58.315213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.711 [2024-07-15 21:44:58.315220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.711 [2024-07-15 21:44:58.318770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.711 [2024-07-15 21:44:58.327759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.711 [2024-07-15 21:44:58.328530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.711 [2024-07-15 21:44:58.328567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.711 [2024-07-15 21:44:58.328577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.711 [2024-07-15 21:44:58.328816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.711 [2024-07-15 21:44:58.329039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.711 [2024-07-15 21:44:58.329047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.711 [2024-07-15 21:44:58.329054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.711 [2024-07-15 21:44:58.332627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.711 [2024-07-15 21:44:58.341621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.711 [2024-07-15 21:44:58.342289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.711 [2024-07-15 21:44:58.342308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.711 [2024-07-15 21:44:58.342315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.711 [2024-07-15 21:44:58.342535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.711 [2024-07-15 21:44:58.342754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.711 [2024-07-15 21:44:58.342762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.711 [2024-07-15 21:44:58.342769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.711 [2024-07-15 21:44:58.346323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.711 [2024-07-15 21:44:58.355520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.711 [2024-07-15 21:44:58.356282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.711 [2024-07-15 21:44:58.356319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.711 [2024-07-15 21:44:58.356329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.711 [2024-07-15 21:44:58.356568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.711 [2024-07-15 21:44:58.356791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.711 [2024-07-15 21:44:58.356799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.711 [2024-07-15 21:44:58.356806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.711 [2024-07-15 21:44:58.360363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.711 [2024-07-15 21:44:58.369356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.711 [2024-07-15 21:44:58.370118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.711 [2024-07-15 21:44:58.370161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.711 [2024-07-15 21:44:58.370172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.711 [2024-07-15 21:44:58.370410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.712 [2024-07-15 21:44:58.370633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.712 [2024-07-15 21:44:58.370641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.712 [2024-07-15 21:44:58.370648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.712 [2024-07-15 21:44:58.374202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.712 [2024-07-15 21:44:58.383205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.712 [2024-07-15 21:44:58.383946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.712 [2024-07-15 21:44:58.383983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.712 [2024-07-15 21:44:58.383994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.712 [2024-07-15 21:44:58.384241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.712 [2024-07-15 21:44:58.384465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.712 [2024-07-15 21:44:58.384473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.712 [2024-07-15 21:44:58.384480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.712 [2024-07-15 21:44:58.388028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.712 [2024-07-15 21:44:58.397029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.712 [2024-07-15 21:44:58.397799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.712 [2024-07-15 21:44:58.397836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.712 [2024-07-15 21:44:58.397850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.712 [2024-07-15 21:44:58.398090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.712 [2024-07-15 21:44:58.398321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.712 [2024-07-15 21:44:58.398330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.712 [2024-07-15 21:44:58.398337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.712 [2024-07-15 21:44:58.401888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.712 [2024-07-15 21:44:58.410894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.712 [2024-07-15 21:44:58.411605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.712 [2024-07-15 21:44:58.411642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.712 [2024-07-15 21:44:58.411652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.712 [2024-07-15 21:44:58.411891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.712 [2024-07-15 21:44:58.412114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.712 [2024-07-15 21:44:58.412133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.712 [2024-07-15 21:44:58.412141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.712 [2024-07-15 21:44:58.415693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.712 [2024-07-15 21:44:58.424699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.712 [2024-07-15 21:44:58.425467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.712 [2024-07-15 21:44:58.425504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.712 [2024-07-15 21:44:58.425515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.712 [2024-07-15 21:44:58.425754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.712 [2024-07-15 21:44:58.425977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.712 [2024-07-15 21:44:58.425985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.712 [2024-07-15 21:44:58.425992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.712 [2024-07-15 21:44:58.429553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.712 [2024-07-15 21:44:58.438558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.712 [2024-07-15 21:44:58.439405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.712 [2024-07-15 21:44:58.439442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.712 [2024-07-15 21:44:58.439452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.712 [2024-07-15 21:44:58.439691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.712 [2024-07-15 21:44:58.439914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.712 [2024-07-15 21:44:58.439926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.712 [2024-07-15 21:44:58.439934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.712 [2024-07-15 21:44:58.443492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.712 [2024-07-15 21:44:58.452490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.712 [2024-07-15 21:44:58.453226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.712 [2024-07-15 21:44:58.453263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.712 [2024-07-15 21:44:58.453275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.712 [2024-07-15 21:44:58.453517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.712 [2024-07-15 21:44:58.453740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.712 [2024-07-15 21:44:58.453750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.712 [2024-07-15 21:44:58.453757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.712 [2024-07-15 21:44:58.457315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.712 [2024-07-15 21:44:58.466322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.712 [2024-07-15 21:44:58.467092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.712 [2024-07-15 21:44:58.467138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.712 [2024-07-15 21:44:58.467151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.712 [2024-07-15 21:44:58.467392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.712 [2024-07-15 21:44:58.467615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.712 [2024-07-15 21:44:58.467623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.712 [2024-07-15 21:44:58.467631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.712 [2024-07-15 21:44:58.471191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.712 [2024-07-15 21:44:58.480210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.712 [2024-07-15 21:44:58.480857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.712 [2024-07-15 21:44:58.480875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.712 [2024-07-15 21:44:58.480883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.712 [2024-07-15 21:44:58.481102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.712 [2024-07-15 21:44:58.481329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.712 [2024-07-15 21:44:58.481337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.712 [2024-07-15 21:44:58.481344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.712 [2024-07-15 21:44:58.484892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.712 [2024-07-15 21:44:58.494110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.712 [2024-07-15 21:44:58.494747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.712 [2024-07-15 21:44:58.494762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.712 [2024-07-15 21:44:58.494769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.712 [2024-07-15 21:44:58.494988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.712 [2024-07-15 21:44:58.495212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.712 [2024-07-15 21:44:58.495220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.712 [2024-07-15 21:44:58.495227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.712 [2024-07-15 21:44:58.498777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.712 [2024-07-15 21:44:58.508020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.712 [2024-07-15 21:44:58.508649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.712 [2024-07-15 21:44:58.508664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.712 [2024-07-15 21:44:58.508671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.712 [2024-07-15 21:44:58.508891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.712 [2024-07-15 21:44:58.509109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.712 [2024-07-15 21:44:58.509117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.712 [2024-07-15 21:44:58.509133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.712 [2024-07-15 21:44:58.512684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.974 [2024-07-15 21:44:58.521904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.974 [2024-07-15 21:44:58.522605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.974 [2024-07-15 21:44:58.522642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.974 [2024-07-15 21:44:58.522653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.974 [2024-07-15 21:44:58.522892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.974 [2024-07-15 21:44:58.523114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.974 [2024-07-15 21:44:58.523130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.974 [2024-07-15 21:44:58.523139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.974 [2024-07-15 21:44:58.526689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.974 [2024-07-15 21:44:58.535902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.974 [2024-07-15 21:44:58.536671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.974 [2024-07-15 21:44:58.536708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.974 [2024-07-15 21:44:58.536718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.974 [2024-07-15 21:44:58.536961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.974 [2024-07-15 21:44:58.537192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.974 [2024-07-15 21:44:58.537202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.974 [2024-07-15 21:44:58.537209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.974 [2024-07-15 21:44:58.540757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.974 [2024-07-15 21:44:58.549748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.974 [2024-07-15 21:44:58.550425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.974 [2024-07-15 21:44:58.550444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.974 [2024-07-15 21:44:58.550451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.974 [2024-07-15 21:44:58.550671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.974 [2024-07-15 21:44:58.550889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.974 [2024-07-15 21:44:58.550897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.974 [2024-07-15 21:44:58.550904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.974 [2024-07-15 21:44:58.554451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.974 [2024-07-15 21:44:58.563651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.974 [2024-07-15 21:44:58.564278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.974 [2024-07-15 21:44:58.564293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.974 [2024-07-15 21:44:58.564300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.974 [2024-07-15 21:44:58.564519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.974 [2024-07-15 21:44:58.564737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.974 [2024-07-15 21:44:58.564745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.974 [2024-07-15 21:44:58.564751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.974 [2024-07-15 21:44:58.568355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.974 [2024-07-15 21:44:58.577559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.974 [2024-07-15 21:44:58.578223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.974 [2024-07-15 21:44:58.578260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.974 [2024-07-15 21:44:58.578272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.974 [2024-07-15 21:44:58.578513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.974 [2024-07-15 21:44:58.578736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.974 [2024-07-15 21:44:58.578744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.975 [2024-07-15 21:44:58.578761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.975 [2024-07-15 21:44:58.582321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.975 [2024-07-15 21:44:58.591530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.975 [2024-07-15 21:44:58.592203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.975 [2024-07-15 21:44:58.592239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.975 [2024-07-15 21:44:58.592250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.975 [2024-07-15 21:44:58.592488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.975 [2024-07-15 21:44:58.592711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.975 [2024-07-15 21:44:58.592719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.975 [2024-07-15 21:44:58.592726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.975 [2024-07-15 21:44:58.596288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.975 [2024-07-15 21:44:58.605494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.975 [2024-07-15 21:44:58.606231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.975 [2024-07-15 21:44:58.606267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.975 [2024-07-15 21:44:58.606280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.975 [2024-07-15 21:44:58.606521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.975 [2024-07-15 21:44:58.606744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.975 [2024-07-15 21:44:58.606752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.975 [2024-07-15 21:44:58.606759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.975 [2024-07-15 21:44:58.610320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.975 [2024-07-15 21:44:58.619310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.975 [2024-07-15 21:44:58.620069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.975 [2024-07-15 21:44:58.620106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.975 [2024-07-15 21:44:58.620118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.975 [2024-07-15 21:44:58.620366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.975 [2024-07-15 21:44:58.620589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.975 [2024-07-15 21:44:58.620597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.975 [2024-07-15 21:44:58.620604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.975 [2024-07-15 21:44:58.624157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.975 [2024-07-15 21:44:58.633162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.975 [2024-07-15 21:44:58.633919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.975 [2024-07-15 21:44:58.633956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.975 [2024-07-15 21:44:58.633966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.975 [2024-07-15 21:44:58.634212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.975 [2024-07-15 21:44:58.634436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.975 [2024-07-15 21:44:58.634444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.975 [2024-07-15 21:44:58.634451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.975 [2024-07-15 21:44:58.638001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.975 [2024-07-15 21:44:58.646998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.975 [2024-07-15 21:44:58.647716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.975 [2024-07-15 21:44:58.647753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.975 [2024-07-15 21:44:58.647764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.975 [2024-07-15 21:44:58.648002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.975 [2024-07-15 21:44:58.648234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.975 [2024-07-15 21:44:58.648243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.975 [2024-07-15 21:44:58.648251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.975 [2024-07-15 21:44:58.651803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.975 [2024-07-15 21:44:58.660799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.975 [2024-07-15 21:44:58.661534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.975 [2024-07-15 21:44:58.661571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.975 [2024-07-15 21:44:58.661582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.975 [2024-07-15 21:44:58.661820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.975 [2024-07-15 21:44:58.662043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.975 [2024-07-15 21:44:58.662051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.975 [2024-07-15 21:44:58.662059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.975 [2024-07-15 21:44:58.665618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.975 [2024-07-15 21:44:58.674625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.975 [2024-07-15 21:44:58.675373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.975 [2024-07-15 21:44:58.675410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.975 [2024-07-15 21:44:58.675421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.975 [2024-07-15 21:44:58.675660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.975 [2024-07-15 21:44:58.675887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.975 [2024-07-15 21:44:58.675895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.975 [2024-07-15 21:44:58.675903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.975 [2024-07-15 21:44:58.679474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.975 [2024-07-15 21:44:58.688472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.975 [2024-07-15 21:44:58.689222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.975 [2024-07-15 21:44:58.689259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.975 [2024-07-15 21:44:58.689271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.975 [2024-07-15 21:44:58.689511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.975 [2024-07-15 21:44:58.689733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.975 [2024-07-15 21:44:58.689742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.975 [2024-07-15 21:44:58.689749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.975 [2024-07-15 21:44:58.693308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.975 [2024-07-15 21:44:58.702306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.975 [2024-07-15 21:44:58.703071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.975 [2024-07-15 21:44:58.703108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.975 [2024-07-15 21:44:58.703119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.975 [2024-07-15 21:44:58.703367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.975 [2024-07-15 21:44:58.703590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.975 [2024-07-15 21:44:58.703598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.975 [2024-07-15 21:44:58.703605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.975 [2024-07-15 21:44:58.707158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.975 [2024-07-15 21:44:58.716157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.975 [2024-07-15 21:44:58.716917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.975 [2024-07-15 21:44:58.716954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.975 [2024-07-15 21:44:58.716964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.975 [2024-07-15 21:44:58.717211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.976 [2024-07-15 21:44:58.717435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.976 [2024-07-15 21:44:58.717444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.976 [2024-07-15 21:44:58.717451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.976 [2024-07-15 21:44:58.721007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.976 [2024-07-15 21:44:58.730000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:08.976 [2024-07-15 21:44:58.730731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.976 [2024-07-15 21:44:58.730768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420
00:29:08.976 [2024-07-15 21:44:58.730778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set
00:29:08.976 [2024-07-15 21:44:58.731017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor
00:29:08.976 [2024-07-15 21:44:58.731248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:08.976 [2024-07-15 21:44:58.731257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:08.976 [2024-07-15 21:44:58.731265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:08.976 [2024-07-15 21:44:58.734817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:08.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2359189 Killed "${NVMF_APP[@]}" "$@"
00:29:08.976 21:44:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:08.976 21:44:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:08.976 21:44:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:08.976 21:44:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:08.976 21:44:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:08.976 [2024-07-15 21:44:58.743813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:08.976 [2024-07-15 21:44:58.744533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.976 [2024-07-15 21:44:58.744570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420
00:29:08.976 [2024-07-15 21:44:58.744581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set
00:29:08.976 [2024-07-15 21:44:58.744819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor
00:29:08.976 [2024-07-15 21:44:58.745042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:08.976 [2024-07-15 21:44:58.745050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:08.976 [2024-07-15 21:44:58.745058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:08.976 21:44:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2360892
00:29:08.976 21:44:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2360892
00:29:08.976 21:44:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:08.976 21:44:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2360892 ']'
00:29:08.976 21:44:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:08.976 21:44:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:08.976 21:44:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:08.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:08.976 21:44:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:08.976 21:44:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:08.976 [2024-07-15 21:44:58.748616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:08.976 [2024-07-15 21:44:58.757624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:08.976 [2024-07-15 21:44:58.758232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.976 [2024-07-15 21:44:58.758270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420
00:29:08.976 [2024-07-15 21:44:58.758282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set
00:29:08.976 [2024-07-15 21:44:58.758526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor
00:29:08.976 [2024-07-15 21:44:58.758749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:08.976 [2024-07-15 21:44:58.758758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:08.976 [2024-07-15 21:44:58.758765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:08.976 [2024-07-15 21:44:58.762324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
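For orientation: the harness lines above show bdevperf.sh killing the previous nvmf_tgt (PID 2359189, the "Killed" message) and tgt_init bringing up a replacement (nvmfpid=2360892) inside the cvl_0_0_ns_spdk namespace; the reconnect failures keep piling up until that new process is listening again. A rough sketch of the restart pattern, with simplified stand-ins for the real tgt_init/nvmfappstart/waitforlisten helpers (illustrative only, not the harness code):

    kill -9 "$old_nvmfpid"     # hypothetical variable; this is what produced the "Killed" message above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # waitforlisten, roughly: poll the RPC socket until the new target answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done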
00:29:08.976 [2024-07-15 21:44:58.771535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.976 [2024-07-15 21:44:58.772352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.976 [2024-07-15 21:44:58.772389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:08.976 [2024-07-15 21:44:58.772400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:08.976 [2024-07-15 21:44:58.772639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:08.976 [2024-07-15 21:44:58.772862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.976 [2024-07-15 21:44:58.772870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.976 [2024-07-15 21:44:58.772878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.976 [2024-07-15 21:44:58.776436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.238 [2024-07-15 21:44:58.785449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.238 [2024-07-15 21:44:58.786088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.238 [2024-07-15 21:44:58.786106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.238 [2024-07-15 21:44:58.786114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.238 [2024-07-15 21:44:58.786338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.238 [2024-07-15 21:44:58.786558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.238 [2024-07-15 21:44:58.786566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.238 [2024-07-15 21:44:58.786573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.239 [2024-07-15 21:44:58.790119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.239 [2024-07-15 21:44:58.798309] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:29:09.239 [2024-07-15 21:44:58.798364] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.239 [2024-07-15 21:44:58.799334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.239 [2024-07-15 21:44:58.799961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-07-15 21:44:58.799977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.239 [2024-07-15 21:44:58.799985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.239 [2024-07-15 21:44:58.800210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.239 [2024-07-15 21:44:58.800429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.239 [2024-07-15 21:44:58.800438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.239 [2024-07-15 21:44:58.800444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.239 [2024-07-15 21:44:58.803987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.239 [2024-07-15 21:44:58.813193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.239 [2024-07-15 21:44:58.813950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-07-15 21:44:58.813987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.239 [2024-07-15 21:44:58.813998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.239 [2024-07-15 21:44:58.814244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.239 [2024-07-15 21:44:58.814468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.239 [2024-07-15 21:44:58.814476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.239 [2024-07-15 21:44:58.814484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.239 [2024-07-15 21:44:58.818038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.239 [2024-07-15 21:44:58.827042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.239 [2024-07-15 21:44:58.827799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-07-15 21:44:58.827836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.239 [2024-07-15 21:44:58.827848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.239 [2024-07-15 21:44:58.828087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.239 [2024-07-15 21:44:58.828319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.239 [2024-07-15 21:44:58.828328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.239 [2024-07-15 21:44:58.828336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.239 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.239 [2024-07-15 21:44:58.831890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.239 [2024-07-15 21:44:58.840895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.239 [2024-07-15 21:44:58.841288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-07-15 21:44:58.841306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.239 [2024-07-15 21:44:58.841319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.239 [2024-07-15 21:44:58.841538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.239 [2024-07-15 21:44:58.841757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.239 [2024-07-15 21:44:58.841765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.239 [2024-07-15 21:44:58.841772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.239 [2024-07-15 21:44:58.845323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
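Side note on the EAL line above: "No free 2048 kB hugepages reported on node 1" is only a notice in this run, but if it ever needs checking, the per-node hugepage counters are exposed through standard procfs/sysfs paths, e.g.:

    grep -i huge /proc/meminfo
    for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
        echo "$n: $(cat "$n"/nr_hugepages) reserved, $(cat "$n"/free_hugepages) free"
    done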
00:29:09.239 [2024-07-15 21:44:58.854733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.239 [2024-07-15 21:44:58.855458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-07-15 21:44:58.855495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.239 [2024-07-15 21:44:58.855506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.239 [2024-07-15 21:44:58.855745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.239 [2024-07-15 21:44:58.855968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.239 [2024-07-15 21:44:58.855976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.239 [2024-07-15 21:44:58.855984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.239 [2024-07-15 21:44:58.859631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.239 [2024-07-15 21:44:58.868635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.239 [2024-07-15 21:44:58.869417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.239 [2024-07-15 21:44:58.869455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.240 [2024-07-15 21:44:58.869465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.240 [2024-07-15 21:44:58.869704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.240 [2024-07-15 21:44:58.869926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.240 [2024-07-15 21:44:58.869934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.240 [2024-07-15 21:44:58.869942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.240 [2024-07-15 21:44:58.873502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.240 [2024-07-15 21:44:58.880466] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:09.240 [2024-07-15 21:44:58.882512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.240 [2024-07-15 21:44:58.883228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-07-15 21:44:58.883266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.240 [2024-07-15 21:44:58.883277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.240 [2024-07-15 21:44:58.883518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.240 [2024-07-15 21:44:58.883746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.240 [2024-07-15 21:44:58.883755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.240 [2024-07-15 21:44:58.883763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.240 [2024-07-15 21:44:58.887327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.240 [2024-07-15 21:44:58.896331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.240 [2024-07-15 21:44:58.897066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-07-15 21:44:58.897103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.240 [2024-07-15 21:44:58.897115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.240 [2024-07-15 21:44:58.897364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.240 [2024-07-15 21:44:58.897588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.240 [2024-07-15 21:44:58.897596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.240 [2024-07-15 21:44:58.897604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.240 [2024-07-15 21:44:58.901161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.240 [2024-07-15 21:44:58.910159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.240 [2024-07-15 21:44:58.910944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-07-15 21:44:58.910982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.240 [2024-07-15 21:44:58.910992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.240 [2024-07-15 21:44:58.911239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.240 [2024-07-15 21:44:58.911462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.240 [2024-07-15 21:44:58.911470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.240 [2024-07-15 21:44:58.911479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.240 [2024-07-15 21:44:58.915030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.240 [2024-07-15 21:44:58.924031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.240 [2024-07-15 21:44:58.924615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-07-15 21:44:58.924651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.240 [2024-07-15 21:44:58.924663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.240 [2024-07-15 21:44:58.924902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.240 [2024-07-15 21:44:58.925133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.240 [2024-07-15 21:44:58.925142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.240 [2024-07-15 21:44:58.925150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.240 [2024-07-15 21:44:58.928702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.240 [2024-07-15 21:44:58.933682] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.240 [2024-07-15 21:44:58.933706] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.240 [2024-07-15 21:44:58.933712] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.240 [2024-07-15 21:44:58.933717] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.240 [2024-07-15 21:44:58.933721] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
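The app_setup_trace notices above already name both ways to grab the tracepoint data; written out as commands to run on the target host while the app is alive:

    spdk_trace -s nvmf -i 0           # snapshot of events at runtime, exactly as suggested above
    cp /dev/shm/nvmf_trace.0 /tmp/    # or keep the shm file for offline analysis/debug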
00:29:09.240 [2024-07-15 21:44:58.933852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:09.240 [2024-07-15 21:44:58.934000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.240 [2024-07-15 21:44:58.934003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:09.240 [2024-07-15 21:44:58.937922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.240 [2024-07-15 21:44:58.938711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-07-15 21:44:58.938749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.240 [2024-07-15 21:44:58.938760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.240 [2024-07-15 21:44:58.939000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.240 [2024-07-15 21:44:58.939231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.240 [2024-07-15 21:44:58.939240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.240 [2024-07-15 21:44:58.939248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.240 [2024-07-15 21:44:58.942799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.240 [2024-07-15 21:44:58.951797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.240 [2024-07-15 21:44:58.952567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.240 [2024-07-15 21:44:58.952605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.240 [2024-07-15 21:44:58.952616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.240 [2024-07-15 21:44:58.952855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.240 [2024-07-15 21:44:58.953078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.240 [2024-07-15 21:44:58.953086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.240 [2024-07-15 21:44:58.953093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.241 [2024-07-15 21:44:58.956651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
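The three "Reactor started" notices above line up with the -m 0xE core mask passed to nvmf_tgt earlier: 0xE is binary 1110, i.e. cores 1, 2 and 3, which also matches "Total cores available: 3". A tiny illustration of expanding such a mask with shell arithmetic:

    mask=0xE
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core is in mask $mask"
    done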
00:29:09.241 [2024-07-15 21:44:58.965652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.241 [2024-07-15 21:44:58.966400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-07-15 21:44:58.966438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.241 [2024-07-15 21:44:58.966449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.241 [2024-07-15 21:44:58.966688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.241 [2024-07-15 21:44:58.966916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.241 [2024-07-15 21:44:58.966925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.241 [2024-07-15 21:44:58.966933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.241 [2024-07-15 21:44:58.970491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.241 [2024-07-15 21:44:58.979504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.241 [2024-07-15 21:44:58.980082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-07-15 21:44:58.980120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.241 [2024-07-15 21:44:58.980139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.241 [2024-07-15 21:44:58.980378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.241 [2024-07-15 21:44:58.980601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.241 [2024-07-15 21:44:58.980610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.241 [2024-07-15 21:44:58.980617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.241 [2024-07-15 21:44:58.984174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.241 [2024-07-15 21:44:58.993382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.241 [2024-07-15 21:44:58.993920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-07-15 21:44:58.993938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.241 [2024-07-15 21:44:58.993946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.241 [2024-07-15 21:44:58.994172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.241 [2024-07-15 21:44:58.994392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.241 [2024-07-15 21:44:58.994400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.241 [2024-07-15 21:44:58.994408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.241 [2024-07-15 21:44:58.997950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.241 [2024-07-15 21:44:59.007365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.241 [2024-07-15 21:44:59.008051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-07-15 21:44:59.008066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.241 [2024-07-15 21:44:59.008073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.241 [2024-07-15 21:44:59.008297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.241 [2024-07-15 21:44:59.008517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.241 [2024-07-15 21:44:59.008524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.241 [2024-07-15 21:44:59.008531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.241 [2024-07-15 21:44:59.012073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.241 [2024-07-15 21:44:59.021283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.241 [2024-07-15 21:44:59.021925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-07-15 21:44:59.021940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.241 [2024-07-15 21:44:59.021947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.241 [2024-07-15 21:44:59.022172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.241 [2024-07-15 21:44:59.022392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.241 [2024-07-15 21:44:59.022400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.241 [2024-07-15 21:44:59.022407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.241 [2024-07-15 21:44:59.025948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.241 [2024-07-15 21:44:59.035165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.241 [2024-07-15 21:44:59.035967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.241 [2024-07-15 21:44:59.036003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.241 [2024-07-15 21:44:59.036014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.241 [2024-07-15 21:44:59.036260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.241 [2024-07-15 21:44:59.036484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.241 [2024-07-15 21:44:59.036493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.241 [2024-07-15 21:44:59.036501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.241 [2024-07-15 21:44:59.040050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.502 [2024-07-15 21:44:59.049050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.502 [2024-07-15 21:44:59.049702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-07-15 21:44:59.049720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.502 [2024-07-15 21:44:59.049728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.502 [2024-07-15 21:44:59.049947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.502 [2024-07-15 21:44:59.050172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.502 [2024-07-15 21:44:59.050180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.502 [2024-07-15 21:44:59.050187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.502 [2024-07-15 21:44:59.053733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.502 [2024-07-15 21:44:59.062936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.502 [2024-07-15 21:44:59.063700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-07-15 21:44:59.063737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.502 [2024-07-15 21:44:59.063752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.502 [2024-07-15 21:44:59.063991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.502 [2024-07-15 21:44:59.064221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.502 [2024-07-15 21:44:59.064230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.502 [2024-07-15 21:44:59.064238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.502 [2024-07-15 21:44:59.067788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.502 [2024-07-15 21:44:59.076783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.502 [2024-07-15 21:44:59.077529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-07-15 21:44:59.077566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.502 [2024-07-15 21:44:59.077577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.502 [2024-07-15 21:44:59.077815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.502 [2024-07-15 21:44:59.078038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.502 [2024-07-15 21:44:59.078046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.502 [2024-07-15 21:44:59.078054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.502 [2024-07-15 21:44:59.081622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.502 [2024-07-15 21:44:59.090636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.502 [2024-07-15 21:44:59.091421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-07-15 21:44:59.091457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.502 [2024-07-15 21:44:59.091468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.502 [2024-07-15 21:44:59.091707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.502 [2024-07-15 21:44:59.091930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.502 [2024-07-15 21:44:59.091938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.502 [2024-07-15 21:44:59.091945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.502 [2024-07-15 21:44:59.095505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.502 [2024-07-15 21:44:59.104507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.502 [2024-07-15 21:44:59.105223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-07-15 21:44:59.105260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.502 [2024-07-15 21:44:59.105272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.502 [2024-07-15 21:44:59.105514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.502 [2024-07-15 21:44:59.105737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.502 [2024-07-15 21:44:59.105750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.502 [2024-07-15 21:44:59.105758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.502 [2024-07-15 21:44:59.109317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.502 [2024-07-15 21:44:59.118315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.502 [2024-07-15 21:44:59.119097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-07-15 21:44:59.119140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.503 [2024-07-15 21:44:59.119151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.503 [2024-07-15 21:44:59.119390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.503 [2024-07-15 21:44:59.119614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.503 [2024-07-15 21:44:59.119622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.503 [2024-07-15 21:44:59.119629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.503 [2024-07-15 21:44:59.123181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.503 [2024-07-15 21:44:59.132190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.503 [2024-07-15 21:44:59.132881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-07-15 21:44:59.132899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.503 [2024-07-15 21:44:59.132906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.503 [2024-07-15 21:44:59.133132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.503 [2024-07-15 21:44:59.133352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.503 [2024-07-15 21:44:59.133359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.503 [2024-07-15 21:44:59.133366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.503 [2024-07-15 21:44:59.136914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.503 [2024-07-15 21:44:59.146120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.503 [2024-07-15 21:44:59.146848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-07-15 21:44:59.146885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.503 [2024-07-15 21:44:59.146895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.503 [2024-07-15 21:44:59.147142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.503 [2024-07-15 21:44:59.147366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.503 [2024-07-15 21:44:59.147375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.503 [2024-07-15 21:44:59.147382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.503 [2024-07-15 21:44:59.150934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.503 [2024-07-15 21:44:59.159938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.503 [2024-07-15 21:44:59.160681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-07-15 21:44:59.160718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.503 [2024-07-15 21:44:59.160729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.503 [2024-07-15 21:44:59.160967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.503 [2024-07-15 21:44:59.161197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.503 [2024-07-15 21:44:59.161207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.503 [2024-07-15 21:44:59.161214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.503 [2024-07-15 21:44:59.164766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.503 [2024-07-15 21:44:59.173767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.503 [2024-07-15 21:44:59.174526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-07-15 21:44:59.174563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.503 [2024-07-15 21:44:59.174574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.503 [2024-07-15 21:44:59.174813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.503 [2024-07-15 21:44:59.175035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.503 [2024-07-15 21:44:59.175044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.503 [2024-07-15 21:44:59.175051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.503 [2024-07-15 21:44:59.178615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.503 [2024-07-15 21:44:59.187613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.503 [2024-07-15 21:44:59.188043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-07-15 21:44:59.188060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.503 [2024-07-15 21:44:59.188068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.503 [2024-07-15 21:44:59.188293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.503 [2024-07-15 21:44:59.188513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.503 [2024-07-15 21:44:59.188521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.503 [2024-07-15 21:44:59.188527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.503 [2024-07-15 21:44:59.192097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.503 [2024-07-15 21:44:59.201524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.503 [2024-07-15 21:44:59.202136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-07-15 21:44:59.202174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.503 [2024-07-15 21:44:59.202188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.503 [2024-07-15 21:44:59.202434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.503 [2024-07-15 21:44:59.202657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.503 [2024-07-15 21:44:59.202665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.503 [2024-07-15 21:44:59.202673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.503 [2024-07-15 21:44:59.206232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.503 [2024-07-15 21:44:59.215442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.503 [2024-07-15 21:44:59.216252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-07-15 21:44:59.216289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.503 [2024-07-15 21:44:59.216300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.503 [2024-07-15 21:44:59.216538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.503 [2024-07-15 21:44:59.216761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.503 [2024-07-15 21:44:59.216769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.503 [2024-07-15 21:44:59.216777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.503 [2024-07-15 21:44:59.220337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.503 [2024-07-15 21:44:59.229335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.503 [2024-07-15 21:44:59.230028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-07-15 21:44:59.230046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.503 [2024-07-15 21:44:59.230053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.503 [2024-07-15 21:44:59.230279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.503 [2024-07-15 21:44:59.230499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.503 [2024-07-15 21:44:59.230506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.503 [2024-07-15 21:44:59.230513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.503 [2024-07-15 21:44:59.234060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.503 [2024-07-15 21:44:59.243273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.503 [2024-07-15 21:44:59.243955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-07-15 21:44:59.243971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.503 [2024-07-15 21:44:59.243978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.503 [2024-07-15 21:44:59.244202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.503 [2024-07-15 21:44:59.244422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.503 [2024-07-15 21:44:59.244430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.503 [2024-07-15 21:44:59.244441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.503 [2024-07-15 21:44:59.247985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.503 [2024-07-15 21:44:59.257194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.503 [2024-07-15 21:44:59.257969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-07-15 21:44:59.258006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.503 [2024-07-15 21:44:59.258016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.503 [2024-07-15 21:44:59.258263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.503 [2024-07-15 21:44:59.258487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.503 [2024-07-15 21:44:59.258495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.503 [2024-07-15 21:44:59.258502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.503 [2024-07-15 21:44:59.262052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.503 [2024-07-15 21:44:59.271054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.503 [2024-07-15 21:44:59.271840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-07-15 21:44:59.271877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.503 [2024-07-15 21:44:59.271888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.503 [2024-07-15 21:44:59.272135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.503 [2024-07-15 21:44:59.272359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.503 [2024-07-15 21:44:59.272367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.503 [2024-07-15 21:44:59.272375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.503 [2024-07-15 21:44:59.275926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.503 [2024-07-15 21:44:59.284938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.503 [2024-07-15 21:44:59.285509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-07-15 21:44:59.285528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.503 [2024-07-15 21:44:59.285535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.503 [2024-07-15 21:44:59.285755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.503 [2024-07-15 21:44:59.285974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.503 [2024-07-15 21:44:59.285981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.503 [2024-07-15 21:44:59.285988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.503 [2024-07-15 21:44:59.289535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.503 [2024-07-15 21:44:59.298739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.503 [2024-07-15 21:44:59.299382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-07-15 21:44:59.299403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.503 [2024-07-15 21:44:59.299410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.503 [2024-07-15 21:44:59.299630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.503 [2024-07-15 21:44:59.299849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.503 [2024-07-15 21:44:59.299856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.503 [2024-07-15 21:44:59.299863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.503 [2024-07-15 21:44:59.303414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.766 [2024-07-15 21:44:59.312621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.766 [2024-07-15 21:44:59.313104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.766 [2024-07-15 21:44:59.313128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.766 [2024-07-15 21:44:59.313137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.766 [2024-07-15 21:44:59.313358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.766 [2024-07-15 21:44:59.313578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.766 [2024-07-15 21:44:59.313585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.766 [2024-07-15 21:44:59.313592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.766 [2024-07-15 21:44:59.317142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.766 [2024-07-15 21:44:59.326557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.766 [2024-07-15 21:44:59.327075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.766 [2024-07-15 21:44:59.327090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.766 [2024-07-15 21:44:59.327097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.766 [2024-07-15 21:44:59.327321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.766 [2024-07-15 21:44:59.327541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.766 [2024-07-15 21:44:59.327549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.766 [2024-07-15 21:44:59.327556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.766 [2024-07-15 21:44:59.331114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.766 [2024-07-15 21:44:59.340542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.766 [2024-07-15 21:44:59.341059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.766 [2024-07-15 21:44:59.341074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.766 [2024-07-15 21:44:59.341082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.766 [2024-07-15 21:44:59.341306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.766 [2024-07-15 21:44:59.341533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.766 [2024-07-15 21:44:59.341540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.766 [2024-07-15 21:44:59.341547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.766 [2024-07-15 21:44:59.345090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.766 [2024-07-15 21:44:59.354503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.766 [2024-07-15 21:44:59.355134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.766 [2024-07-15 21:44:59.355149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.766 [2024-07-15 21:44:59.355156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.766 [2024-07-15 21:44:59.355375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.766 [2024-07-15 21:44:59.355593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.766 [2024-07-15 21:44:59.355601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.766 [2024-07-15 21:44:59.355608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.766 [2024-07-15 21:44:59.359154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.766 [2024-07-15 21:44:59.368357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.766 [2024-07-15 21:44:59.369139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.766 [2024-07-15 21:44:59.369176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.766 [2024-07-15 21:44:59.369188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.766 [2024-07-15 21:44:59.369430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.766 [2024-07-15 21:44:59.369652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.766 [2024-07-15 21:44:59.369661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.766 [2024-07-15 21:44:59.369668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.766 [2024-07-15 21:44:59.373220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.766 [2024-07-15 21:44:59.382233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.766 [2024-07-15 21:44:59.382941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.766 [2024-07-15 21:44:59.382979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.766 [2024-07-15 21:44:59.382989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.766 [2024-07-15 21:44:59.383235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.766 [2024-07-15 21:44:59.383460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.766 [2024-07-15 21:44:59.383469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.766 [2024-07-15 21:44:59.383476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.766 [2024-07-15 21:44:59.387032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.766 [2024-07-15 21:44:59.396032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.766 [2024-07-15 21:44:59.396820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.767 [2024-07-15 21:44:59.396857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.767 [2024-07-15 21:44:59.396868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.767 [2024-07-15 21:44:59.397106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.767 [2024-07-15 21:44:59.397336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.767 [2024-07-15 21:44:59.397346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.767 [2024-07-15 21:44:59.397353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.767 [2024-07-15 21:44:59.400903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.767 [2024-07-15 21:44:59.409907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.767 [2024-07-15 21:44:59.410570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.767 [2024-07-15 21:44:59.410589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.767 [2024-07-15 21:44:59.410597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.767 [2024-07-15 21:44:59.410816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.767 [2024-07-15 21:44:59.411035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.767 [2024-07-15 21:44:59.411043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.767 [2024-07-15 21:44:59.411051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.767 [2024-07-15 21:44:59.414602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.767 [2024-07-15 21:44:59.423811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.767 [2024-07-15 21:44:59.424384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.767 [2024-07-15 21:44:59.424422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.767 [2024-07-15 21:44:59.424432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.767 [2024-07-15 21:44:59.424671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.767 [2024-07-15 21:44:59.424894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.767 [2024-07-15 21:44:59.424902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.767 [2024-07-15 21:44:59.424910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.767 [2024-07-15 21:44:59.428470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.767 [2024-07-15 21:44:59.437692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.767 [2024-07-15 21:44:59.438445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.767 [2024-07-15 21:44:59.438482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.767 [2024-07-15 21:44:59.438498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.767 [2024-07-15 21:44:59.438737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.767 [2024-07-15 21:44:59.438960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.767 [2024-07-15 21:44:59.438968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.767 [2024-07-15 21:44:59.438975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.767 [2024-07-15 21:44:59.442531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.767 [2024-07-15 21:44:59.451534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.767 [2024-07-15 21:44:59.452354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.767 [2024-07-15 21:44:59.452392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.767 [2024-07-15 21:44:59.452404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.767 [2024-07-15 21:44:59.452643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.767 [2024-07-15 21:44:59.452866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.767 [2024-07-15 21:44:59.452874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.767 [2024-07-15 21:44:59.452882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.767 [2024-07-15 21:44:59.456441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.767 [2024-07-15 21:44:59.465531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.767 [2024-07-15 21:44:59.466330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.767 [2024-07-15 21:44:59.466367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.767 [2024-07-15 21:44:59.466378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.767 [2024-07-15 21:44:59.466617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.767 [2024-07-15 21:44:59.466839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.767 [2024-07-15 21:44:59.466848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.767 [2024-07-15 21:44:59.466855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.767 [2024-07-15 21:44:59.470415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.767 [2024-07-15 21:44:59.479430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.767 [2024-07-15 21:44:59.480168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.767 [2024-07-15 21:44:59.480206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.767 [2024-07-15 21:44:59.480218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.767 [2024-07-15 21:44:59.480459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.767 [2024-07-15 21:44:59.480683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.767 [2024-07-15 21:44:59.480696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.767 [2024-07-15 21:44:59.480704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.767 [2024-07-15 21:44:59.484266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.767 [2024-07-15 21:44:59.493271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.767 [2024-07-15 21:44:59.494061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.767 [2024-07-15 21:44:59.494098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.767 [2024-07-15 21:44:59.494110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.767 [2024-07-15 21:44:59.494360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.768 [2024-07-15 21:44:59.494584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.768 [2024-07-15 21:44:59.494593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.768 [2024-07-15 21:44:59.494600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.768 [2024-07-15 21:44:59.498156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.768 [2024-07-15 21:44:59.507161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.768 [2024-07-15 21:44:59.507881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.768 [2024-07-15 21:44:59.507919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.768 [2024-07-15 21:44:59.507930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.768 [2024-07-15 21:44:59.508176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.768 [2024-07-15 21:44:59.508400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.768 [2024-07-15 21:44:59.508410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.768 [2024-07-15 21:44:59.508418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.768 [2024-07-15 21:44:59.511968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.768 [2024-07-15 21:44:59.520971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.768 [2024-07-15 21:44:59.521783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.768 [2024-07-15 21:44:59.521820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.768 [2024-07-15 21:44:59.521831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.768 [2024-07-15 21:44:59.522069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.768 [2024-07-15 21:44:59.522300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.768 [2024-07-15 21:44:59.522309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.768 [2024-07-15 21:44:59.522317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.768 [2024-07-15 21:44:59.525868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.768 [2024-07-15 21:44:59.534881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.768 [2024-07-15 21:44:59.535580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.768 [2024-07-15 21:44:59.535617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.768 [2024-07-15 21:44:59.535628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.768 [2024-07-15 21:44:59.535867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.768 [2024-07-15 21:44:59.536090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.768 [2024-07-15 21:44:59.536098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.768 [2024-07-15 21:44:59.536105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.768 [2024-07-15 21:44:59.539663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.768 [2024-07-15 21:44:59.548870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.768 [2024-07-15 21:44:59.549327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.768 [2024-07-15 21:44:59.549346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.768 [2024-07-15 21:44:59.549353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.768 [2024-07-15 21:44:59.549573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.768 [2024-07-15 21:44:59.549792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.768 [2024-07-15 21:44:59.549800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.768 [2024-07-15 21:44:59.549807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.768 [2024-07-15 21:44:59.553357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.768 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:09.768 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:09.768 21:44:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:09.768 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:09.768 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.768 [2024-07-15 21:44:59.562772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.768 [2024-07-15 21:44:59.563537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.768 [2024-07-15 21:44:59.563574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:09.768 [2024-07-15 21:44:59.563585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:09.768 [2024-07-15 21:44:59.563824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:09.768 [2024-07-15 21:44:59.564047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.768 [2024-07-15 21:44:59.564056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.768 [2024-07-15 21:44:59.564063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.768 [2024-07-15 21:44:59.567624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.030 [2024-07-15 21:44:59.576632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.030 [2024-07-15 21:44:59.577412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-07-15 21:44:59.577450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:10.030 [2024-07-15 21:44:59.577461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:10.030 [2024-07-15 21:44:59.577700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:10.030 [2024-07-15 21:44:59.577923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.030 [2024-07-15 21:44:59.577932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.030 [2024-07-15 21:44:59.577940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.030 [2024-07-15 21:44:59.581508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.030 [2024-07-15 21:44:59.590515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.030 [2024-07-15 21:44:59.591226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-07-15 21:44:59.591263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:10.030 [2024-07-15 21:44:59.591275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:10.030 [2024-07-15 21:44:59.591518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:10.030 [2024-07-15 21:44:59.591741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.030 [2024-07-15 21:44:59.591750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.030 [2024-07-15 21:44:59.591760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.030 [2024-07-15 21:44:59.595320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.030 21:44:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.030 21:44:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:10.030 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.030 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.030 [2024-07-15 21:44:59.604323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.030 [2024-07-15 21:44:59.604430] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.030 [2024-07-15 21:44:59.604973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-07-15 21:44:59.605010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:10.030 [2024-07-15 21:44:59.605021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:10.030 [2024-07-15 21:44:59.605268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:10.030 [2024-07-15 21:44:59.605492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.030 [2024-07-15 21:44:59.605500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.030 [2024-07-15 21:44:59.605508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.030 [2024-07-15 21:44:59.609059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.030 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.030 21:44:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:10.030 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.030 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.030 [2024-07-15 21:44:59.618279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.030 [2024-07-15 21:44:59.618968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-07-15 21:44:59.618987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:10.030 [2024-07-15 21:44:59.618995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:10.030 [2024-07-15 21:44:59.619220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:10.030 [2024-07-15 21:44:59.619440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.030 [2024-07-15 21:44:59.619448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.030 [2024-07-15 21:44:59.619455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:10.030 [2024-07-15 21:44:59.622999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.030 [2024-07-15 21:44:59.632211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.030 [2024-07-15 21:44:59.632987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.030 [2024-07-15 21:44:59.633024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:10.030 [2024-07-15 21:44:59.633035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:10.030 [2024-07-15 21:44:59.633282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:10.030 [2024-07-15 21:44:59.633506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.030 [2024-07-15 21:44:59.633515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.030 [2024-07-15 21:44:59.633522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.030 [2024-07-15 21:44:59.637078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.030 Malloc0 00:29:10.030 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.030 21:44:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:10.030 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.030 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.031 [2024-07-15 21:44:59.646086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.031 [2024-07-15 21:44:59.646885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-07-15 21:44:59.646922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:10.031 [2024-07-15 21:44:59.646933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:10.031 [2024-07-15 21:44:59.647180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:10.031 [2024-07-15 21:44:59.647403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.031 [2024-07-15 21:44:59.647416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.031 [2024-07-15 21:44:59.647424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.031 [2024-07-15 21:44:59.650972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.031 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.031 21:44:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:10.031 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.031 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.031 [2024-07-15 21:44:59.659982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.031 [2024-07-15 21:44:59.660746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.031 [2024-07-15 21:44:59.660784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c510 with addr=10.0.0.2, port=4420 00:29:10.031 [2024-07-15 21:44:59.660795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c510 is same with the state(5) to be set 00:29:10.031 [2024-07-15 21:44:59.661033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c510 (9): Bad file descriptor 00:29:10.031 [2024-07-15 21:44:59.661264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.031 [2024-07-15 21:44:59.661273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.031 [2024-07-15 21:44:59.661281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.031 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.031 21:44:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.031 [2024-07-15 21:44:59.664832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.031 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.031 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.031 [2024-07-15 21:44:59.671459] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.031 [2024-07-15 21:44:59.673833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.031 21:44:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.031 21:44:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2359822 00:29:10.031 [2024-07-15 21:44:59.712212] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
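For reference, the rpc_cmd calls interleaved with the reconnect noise above amount to the following target-side setup. This is a minimal sketch only: it assumes SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock socket, with the arguments copied verbatim from the trace (the transport options, the 64 MB Malloc0 bdev with 512-byte blocks, the cnode1 subsystem, and the 10.0.0.2:4420 TCP listener).

  # same arguments as the rpc_cmd trace above; rpc.py path is relative to the SPDK repo root
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up at 10.0.0.2:4420, the bdevperf host can finally reconnect, which is the point at which the retry loop above ends with "Resetting controller successful".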
00:29:20.029 00:29:20.029 Latency(us) 00:29:20.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.029 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:20.029 Verification LBA range: start 0x0 length 0x4000 00:29:20.029 Nvme1n1 : 15.01 8102.14 31.65 9605.27 0.00 7203.21 798.72 14199.47 00:29:20.029 =================================================================================================================== 00:29:20.029 Total : 8102.14 31.65 9605.27 0.00 7203.21 798.72 14199.47 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:20.029 rmmod nvme_tcp 00:29:20.029 rmmod nvme_fabrics 00:29:20.029 rmmod nvme_keyring 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2360892 ']' 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2360892 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2360892 ']' 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2360892 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2360892 00:29:20.029 21:45:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:20.030 21:45:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:20.030 21:45:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2360892' 00:29:20.030 killing process with pid 2360892 00:29:20.030 21:45:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2360892 00:29:20.030 21:45:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2360892 00:29:20.030 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:20.030 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
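As a quick sanity check on the Latency table above, the I/O size and the two throughput columns are consistent: 8102.14 IOPS x 4096 bytes per I/O = 33,186,365 B/s, or about 31.65 MiB/s, which matches the reported MiB/s figure over the 15.01 s runtime.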
00:29:20.030 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:20.030 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:20.030 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:20.030 21:45:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.030 21:45:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:20.030 21:45:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.973 21:45:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:20.973 00:29:20.973 real 0m27.546s 00:29:20.973 user 1m2.383s 00:29:20.973 sys 0m6.973s 00:29:20.973 21:45:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:20.973 21:45:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:20.973 ************************************ 00:29:20.973 END TEST nvmf_bdevperf 00:29:20.973 ************************************ 00:29:20.973 21:45:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:20.973 21:45:10 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:20.973 21:45:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:20.973 21:45:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:20.973 21:45:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.973 ************************************ 00:29:20.973 START TEST nvmf_target_disconnect 00:29:20.973 ************************************ 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:20.973 * Looking for test storage... 
00:29:20.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:20.973 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.974 21:45:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:20.974 21:45:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.974 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:20.974 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:20.974 21:45:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:20.974 21:45:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:29.118 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:29.119 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:29.119 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.119 21:45:17 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:29.119 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:29.119 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:29.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:29:29.119 00:29:29.119 --- 10.0.0.2 ping statistics --- 00:29:29.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.119 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:29.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:29.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:29:29.119 00:29:29.119 --- 10.0.0.1 ping statistics --- 00:29:29.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.119 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:29.119 21:45:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:29.119 ************************************ 00:29:29.119 START TEST nvmf_target_disconnect_tc1 00:29:29.119 ************************************ 00:29:29.119 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:29:29.119 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:29.119 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:29:29.119 
21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:29.119 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:29.119 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:29.119 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:29.119 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:29.119 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:29.119 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:29.119 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:29.119 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:29.119 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:29.119 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.120 [2024-07-15 21:45:18.114652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.120 [2024-07-15 21:45:18.114699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e63910 with addr=10.0.0.2, port=4420 00:29:29.120 [2024-07-15 21:45:18.114724] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:29.120 [2024-07-15 21:45:18.114738] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:29.120 [2024-07-15 21:45:18.114744] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:29.120 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:29.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:29.120 Initializing NVMe Controllers 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:29.120 00:29:29.120 real 0m0.107s 00:29:29.120 user 0m0.050s 00:29:29.120 sys 
0m0.057s 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:29.120 ************************************ 00:29:29.120 END TEST nvmf_target_disconnect_tc1 00:29:29.120 ************************************ 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:29.120 ************************************ 00:29:29.120 START TEST nvmf_target_disconnect_tc2 00:29:29.120 ************************************ 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2367493 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2367493 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2367493 ']' 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:29.120 21:45:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.120 [2024-07-15 21:45:18.276502] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:29:29.120 [2024-07-15 21:45:18.276561] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.120 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.120 [2024-07-15 21:45:18.362872] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:29.120 [2024-07-15 21:45:18.456010] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.120 [2024-07-15 21:45:18.456066] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:29.120 [2024-07-15 21:45:18.456074] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:29.120 [2024-07-15 21:45:18.456081] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:29.120 [2024-07-15 21:45:18.456087] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:29.120 [2024-07-15 21:45:18.456242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:29.120 [2024-07-15 21:45:18.456528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:29.120 [2024-07-15 21:45:18.456690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:29.120 [2024-07-15 21:45:18.456692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:29.382 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:29.382 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:29.382 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:29.382 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:29.382 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.382 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.382 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.383 Malloc0 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:29.383 21:45:19 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.383 [2024-07-15 21:45:19.131085] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.383 [2024-07-15 21:45:19.159452] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2367722 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:29.383 21:45:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:29.646 EAL: No free 2048 kB 
hugepages reported on node 1 00:29:31.561 21:45:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2367493 00:29:31.561 21:45:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Write completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Write completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Write completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Write completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Write completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Write completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Write completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Write completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Write completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Write completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Write completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Read completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Write completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 Write completed with error (sct=0, sc=8) 00:29:31.561 starting I/O failed 00:29:31.561 [2024-07-15 21:45:21.187862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.561 [2024-07-15 21:45:21.188433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.561 [2024-07-15 21:45:21.188464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.561 qpair failed and we were unable 
to recover it. 00:29:31.561 [2024-07-15 21:45:21.188770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.561 [2024-07-15 21:45:21.188779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.561 qpair failed and we were unable to recover it. 00:29:31.561 [2024-07-15 21:45:21.189118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.561 [2024-07-15 21:45:21.189132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.561 qpair failed and we were unable to recover it. 00:29:31.561 [2024-07-15 21:45:21.189626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.561 [2024-07-15 21:45:21.189654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.561 qpair failed and we were unable to recover it. 00:29:31.561 [2024-07-15 21:45:21.190082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.561 [2024-07-15 21:45:21.190091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.561 qpair failed and we were unable to recover it. 00:29:31.561 [2024-07-15 21:45:21.190179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.561 [2024-07-15 21:45:21.190194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.561 qpair failed and we were unable to recover it. 00:29:31.561 [2024-07-15 21:45:21.190596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.561 [2024-07-15 21:45:21.190604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.561 qpair failed and we were unable to recover it. 00:29:31.561 [2024-07-15 21:45:21.190930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.561 [2024-07-15 21:45:21.190937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.561 qpair failed and we were unable to recover it. 00:29:31.561 [2024-07-15 21:45:21.191381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.561 [2024-07-15 21:45:21.191410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.561 qpair failed and we were unable to recover it. 00:29:31.561 [2024-07-15 21:45:21.191843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.561 [2024-07-15 21:45:21.191852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.561 qpair failed and we were unable to recover it. 00:29:31.561 [2024-07-15 21:45:21.192426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.561 [2024-07-15 21:45:21.192455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.561 qpair failed and we were unable to recover it. 
00:29:31.561 [2024-07-15 21:45:21.192780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.561 [2024-07-15 21:45:21.192790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.561 qpair failed and we were unable to recover it. 00:29:31.561 [2024-07-15 21:45:21.193355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.193384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.193802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.193811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.194141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.194150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.194514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.194522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.194941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.194949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.195279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.195287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.195652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.195660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.195889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.195896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.196278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.196286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 
00:29:31.562 [2024-07-15 21:45:21.196702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.196709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.197116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.197127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.197568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.197576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.198001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.198012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.198423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.198431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.198852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.198859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.199379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.199408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.199833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.199842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.200141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.200149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.200468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.200476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 
00:29:31.562 [2024-07-15 21:45:21.200908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.200915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.201235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.201243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.201670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.201678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.202106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.202114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.202545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.202553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.202882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.202889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.203380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.203409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.203849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.203858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.204295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.204303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.204721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.204728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 
00:29:31.562 [2024-07-15 21:45:21.205037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.205044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.205338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.205346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.205824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.205832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.206155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.206163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.206515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.206523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.206931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.206938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.207270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.207277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.207721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.207729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.208072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.208079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.208478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.208485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 
00:29:31.562 [2024-07-15 21:45:21.208867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.208874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.562 [2024-07-15 21:45:21.209281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.562 [2024-07-15 21:45:21.209289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.562 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.209604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.209611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.210037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.210044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.210440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.210447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.210873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.210879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.211163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.211170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.211574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.211581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.211975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.211982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.212448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.212455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 
00:29:31.563 [2024-07-15 21:45:21.212754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.212760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.213069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.213076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.213408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.213415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.213864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.213872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.214302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.214308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.214736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.214743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.215127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.215134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.215543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.215549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.215865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.215872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.216432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.216459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 
00:29:31.563 [2024-07-15 21:45:21.216795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.216804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.217326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.217354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.217791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.217800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.218214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.218221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.218645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.218651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.218976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.218982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.219317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.219324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.219738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.219746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.220147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.220155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.220483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.220489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 
00:29:31.563 [2024-07-15 21:45:21.220875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.220881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.221272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.221279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.221699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.221706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.222163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.222171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.222476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.222482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.222797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.222803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.223229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.223235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.223636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.223642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.224046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.224052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.224393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.224400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 
00:29:31.563 [2024-07-15 21:45:21.224825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.224832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.225262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.563 [2024-07-15 21:45:21.225268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.563 qpair failed and we were unable to recover it. 00:29:31.563 [2024-07-15 21:45:21.225670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.225677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.226108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.226115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.226518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.226526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.226950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.226957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.227431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.227458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.227900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.227909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.228081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.228089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.228484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.228492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 
00:29:31.564 [2024-07-15 21:45:21.228929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.228936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.229428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.229455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.229877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.229886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.230407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.230438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.230857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.230865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.231416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.231443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.231848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.231856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.232357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.232384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.232786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.232794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.233183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.233190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 
00:29:31.564 [2024-07-15 21:45:21.233644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.233650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.233991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.233998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.234355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.234362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.234749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.234756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.235163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.235170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.235537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.235544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.235977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.235983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.236380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.236387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.236809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.236816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.237119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.237133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 
00:29:31.564 [2024-07-15 21:45:21.237326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.237336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.237720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.237727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.238109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.238115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.238577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.238584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.238979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.238986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.239289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.239317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.239778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.239787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.240302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.240330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.240712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.240720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.241021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.241028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 
00:29:31.564 [2024-07-15 21:45:21.241301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.241308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.241699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.564 [2024-07-15 21:45:21.241705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.564 qpair failed and we were unable to recover it. 00:29:31.564 [2024-07-15 21:45:21.242075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.242082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.242240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.242249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.242642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.242649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.243046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.243053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.243421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.243427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.243843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.243849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.244228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.244235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.244652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.244658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 
00:29:31.565 [2024-07-15 21:45:21.244962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.244969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.245329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.245336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.245720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.245727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.246114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.246128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.246406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.246413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.246819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.246825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.247096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.247102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.247489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.247496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.247886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.247893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.248403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.248430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 
00:29:31.565 [2024-07-15 21:45:21.248890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.248898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.249340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.249367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.249768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.249776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.250155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.250163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.250571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.250577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.251020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.251027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.251429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.251436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.251829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.251836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.252231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.252237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.252728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.252734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 
00:29:31.565 [2024-07-15 21:45:21.253135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.253142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.253544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.253550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.253929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.253937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.254358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.254366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.254764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.565 [2024-07-15 21:45:21.254772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.565 qpair failed and we were unable to recover it. 00:29:31.565 [2024-07-15 21:45:21.255087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.255093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.255532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.255539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.255996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.256002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.256547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.256574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.256964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.256972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 
00:29:31.566 [2024-07-15 21:45:21.257469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.257496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.257976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.257984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.258489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.258516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.258842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.258851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.259373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.259400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.259801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.259809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.260362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.260390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.260797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.260805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.261200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.261208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.261528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.261534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 
00:29:31.566 [2024-07-15 21:45:21.261793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.261801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.262229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.262236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.262707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.262714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.263136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.263146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.263618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.263624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.264008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.264014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.264472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.264479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.264959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.264966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.265256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.265264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.265688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.265695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 
00:29:31.566 [2024-07-15 21:45:21.266125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.266132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.266492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.266499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.266850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.266856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.267345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.267372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.267777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.267785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.268172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.268179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.268577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.268584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.268991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.268998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.269481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.269488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.269884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.269890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 
00:29:31.566 [2024-07-15 21:45:21.270337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.270364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.270781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.270790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.271113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.271120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.271509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.271515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.271833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.271840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.566 [2024-07-15 21:45:21.272411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.566 [2024-07-15 21:45:21.272438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.566 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.272898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.272906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.273313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.273340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.273601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.273610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.274029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.274036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 
00:29:31.567 [2024-07-15 21:45:21.274431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.274439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.274864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.274870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.275279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.275286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.275546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.275554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.275975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.275982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.276329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.276336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.276644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.276650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.277059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.277066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.277167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.277175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.277553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.277560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 
00:29:31.567 [2024-07-15 21:45:21.277949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.277956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.278357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.278364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.278658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.278664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.279089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.279099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.279380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.279387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.279813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.279820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.280241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.280248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.280635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.280642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.281047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.281054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.281532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.281539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 
00:29:31.567 [2024-07-15 21:45:21.281848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.281855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.282292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.282298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.282700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.282706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.282916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.282924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.283270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.283277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.283711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.283718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.284148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.284154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.284573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.284580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.284889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.284896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.285359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.285366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 
00:29:31.567 [2024-07-15 21:45:21.285564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.285572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.285995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.286001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.286391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.286398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.286825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.286832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.287243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.287250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.287743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.287749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.567 [2024-07-15 21:45:21.288030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.567 [2024-07-15 21:45:21.288045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.567 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.288281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.288288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.288661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.288667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.289072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.289079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 
00:29:31.568 [2024-07-15 21:45:21.289519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.289527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.289914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.289920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.290234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.290241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.290670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.290676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.291101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.291108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.291499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.291506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.291904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.291910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.292465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.292492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.292980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.292988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.293475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.293503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 
00:29:31.568 [2024-07-15 21:45:21.293942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.293950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.294465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.294492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.294898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.294907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.295427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.295457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.295854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.295862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.296367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.296394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.296822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.296830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.297017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.297025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.297313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.297320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.297717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.297723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 
00:29:31.568 [2024-07-15 21:45:21.298136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.298143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.298438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.298444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.298926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.298933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.299394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.299401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.299583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.299592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.300024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.300030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.300448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.300454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.300775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.300782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.301166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.301173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.301638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.301645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 
00:29:31.568 [2024-07-15 21:45:21.302023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.302029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.302382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.302389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.302803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.302810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.303223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.303229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.303634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.303640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.303907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.303914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.568 [2024-07-15 21:45:21.304338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.568 [2024-07-15 21:45:21.304345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.568 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.304752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.304758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.305146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.305153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.305480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.305487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 
00:29:31.569 [2024-07-15 21:45:21.305872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.305881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.306310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.306317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.306783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.306790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.307217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.307223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.307623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.307630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.307931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.307938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.308328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.308334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.308716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.308722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.309107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.309113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.309543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.309550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 
00:29:31.569 [2024-07-15 21:45:21.310029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.310035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.310440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.310447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.310844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.310852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.311358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.311386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.311801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.311810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.312031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.312040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.312500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.312508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.312892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.312898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.313234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.313241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.313648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.313655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 
00:29:31.569 [2024-07-15 21:45:21.314083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.314089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.314558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.314565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.314825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.314833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.315237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.315244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.315628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.315635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.316037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.316045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.316420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.316427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.316855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.316861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.317277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.317284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.317728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.317734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 
00:29:31.569 [2024-07-15 21:45:21.318139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.318147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.318435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.318442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.318643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.318651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.319071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.319078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.319467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.319474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.569 [2024-07-15 21:45:21.319893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.569 [2024-07-15 21:45:21.319899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.569 qpair failed and we were unable to recover it. 00:29:31.570 [2024-07-15 21:45:21.320220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.570 [2024-07-15 21:45:21.320227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.570 qpair failed and we were unable to recover it. 00:29:31.570 [2024-07-15 21:45:21.320635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.570 [2024-07-15 21:45:21.320641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.570 qpair failed and we were unable to recover it. 00:29:31.570 [2024-07-15 21:45:21.320939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.570 [2024-07-15 21:45:21.320946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.570 qpair failed and we were unable to recover it. 00:29:31.570 [2024-07-15 21:45:21.321346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.570 [2024-07-15 21:45:21.321353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.570 qpair failed and we were unable to recover it. 
00:29:31.570 [2024-07-15 21:45:21.321659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.570 [2024-07-15 21:45:21.321668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:31.570 qpair failed and we were unable to recover it.
00:29:31.570 [2024-07-15 21:45:21.321969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.570 [2024-07-15 21:45:21.321976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:31.570 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, with only the timestamps advancing, from 21:45:21.322 through 21:45:21.408 (console timestamps 00:29:31.570 to 00:29:31.848) ...]
00:29:31.848 [2024-07-15 21:45:21.408157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.848 [2024-07-15 21:45:21.408165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:31.848 qpair failed and we were unable to recover it.
00:29:31.848 [2024-07-15 21:45:21.408583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.408589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.408975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.408981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.409432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.409438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.409824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.409830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.410267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.410274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.410668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.410674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.410998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.411005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.411403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.411410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.411711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.411718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.412086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.412093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 
00:29:31.848 [2024-07-15 21:45:21.412503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.412510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.412811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.412817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.413209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.413216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.413628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.413635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.413843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.413849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.414221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.414228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.414635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.414641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.415056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.415062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.415509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.415516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 00:29:31.848 [2024-07-15 21:45:21.415812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.848 [2024-07-15 21:45:21.415819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.848 qpair failed and we were unable to recover it. 
00:29:31.849 [2024-07-15 21:45:21.416113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.416120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.416485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.416492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.416909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.416917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.417426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.417454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.417852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.417861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.418357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.418384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.418812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.418821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.419233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.419241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.419634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.419641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.419942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.419949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 
00:29:31.849 [2024-07-15 21:45:21.420253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.420264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.420677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.420683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.420883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.420892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.421315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.421322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.421720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.421726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.422017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.422024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.422430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.422436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.422835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.422842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.423276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.423283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.423686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.423693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 
00:29:31.849 [2024-07-15 21:45:21.424142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.424149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.424513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.424519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.424828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.424835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.425242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.425249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.425694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.425702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.426102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.426110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.426536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.426543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.426928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.426936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.427242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.427250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.427657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.427665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 
00:29:31.849 [2024-07-15 21:45:21.428090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.428097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.428496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.428504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.428808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.428815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.429262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.429269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.429661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.429667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.430078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.430085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.849 [2024-07-15 21:45:21.430473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.849 [2024-07-15 21:45:21.430480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.849 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.430866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.430872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.431299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.431306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.431509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.431518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 
00:29:31.850 [2024-07-15 21:45:21.431916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.431922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.432336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.432343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.432646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.432653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.433085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.433091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.433478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.433485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.433892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.433899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.434330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.434337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.434716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.434722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.435135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.435142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.435519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.435526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 
00:29:31.850 [2024-07-15 21:45:21.435951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.435960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.436340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.436347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.436770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.436777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.437174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.437181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.437461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.437467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.437895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.437902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.438310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.438317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.438614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.438621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.439047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.439053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.439486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.439492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 
00:29:31.850 [2024-07-15 21:45:21.439890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.439896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.440220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.440227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.440701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.440708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.440967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.440975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.441394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.441401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.441802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.441809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.442211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.442218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.442643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.442650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.443155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.443162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.443548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.443554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 
00:29:31.850 [2024-07-15 21:45:21.443945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.443952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.444199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.850 [2024-07-15 21:45:21.444206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.850 qpair failed and we were unable to recover it. 00:29:31.850 [2024-07-15 21:45:21.444523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.444529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.444913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.444920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.445321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.445328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.445750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.445756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.446179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.446185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.446572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.446579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.446778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.446786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.447159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.447167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 
00:29:31.851 [2024-07-15 21:45:21.447486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.447492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.447882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.447888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.448280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.448287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.448562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.448569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.448751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.448759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.449146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.449153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.449558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.449564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.450037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.450044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.450440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.450447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.450762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.450769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 
00:29:31.851 [2024-07-15 21:45:21.451077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.451087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.451524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.451531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.451912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.451919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.452332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.452339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.452757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.452764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.453191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.453198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.453588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.453595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.454009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.454015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.454422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.454428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.454818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.454824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 
00:29:31.851 [2024-07-15 21:45:21.455211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.455218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.455601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.455609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.456040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.456046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.456325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.456332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.456617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.456624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.457058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.457064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.457465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.457472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.457859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.457866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.458270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.458277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.458584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.458591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 
00:29:31.851 [2024-07-15 21:45:21.459026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.459032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.851 [2024-07-15 21:45:21.459443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.851 [2024-07-15 21:45:21.459449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.851 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.459874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.459881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.460203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.460210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.460621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.460627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.461018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.461025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.461429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.461436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.461856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.461863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.462284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.462291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.462726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.462733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 
00:29:31.852 [2024-07-15 21:45:21.463133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.463140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.463558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.463565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.463743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.463751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.464138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.464145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.464541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.464548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.464866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.464872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.465265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.465272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.465695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.465701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.466102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.466108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.466426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.466433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 
00:29:31.852 [2024-07-15 21:45:21.466858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.466866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.467252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.467258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.467676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.467683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.467991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.467998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.468414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.468420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.468616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.468623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.469008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.469014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.469306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.469313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.469628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.469634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.470024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.470030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 
00:29:31.852 [2024-07-15 21:45:21.470419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.470426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.470830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.470838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.471229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.471236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.471624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.471630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.472020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.472027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.472441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.472448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.472837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.472845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.473310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.473317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.852 [2024-07-15 21:45:21.473699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.852 [2024-07-15 21:45:21.473705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.852 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.474091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.474097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 
00:29:31.853 [2024-07-15 21:45:21.474482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.474489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.474911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.474917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.475401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.475428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.475824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.475834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.476311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.476319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.476716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.476723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.477128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.477135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.477516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.477523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.477940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.477946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.478350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.478377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 
00:29:31.853 [2024-07-15 21:45:21.478799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.478807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.479325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.479352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.479668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.479677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.480130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.480138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.480534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.480540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.480951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.480957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.481476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.481504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.481959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.481967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.482496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.482523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.482926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.482935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 
00:29:31.853 [2024-07-15 21:45:21.483457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.483487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.483882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.483891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.484509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.484537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.484935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.484943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.485458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.485486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.485918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.485926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.486493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.486520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.486920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.486928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.487449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.487476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.487684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.487693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 
00:29:31.853 [2024-07-15 21:45:21.488068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.488075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.488380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.488388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.488797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.488804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.489202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.489210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.489607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.489613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.490002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.490009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.490477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.490484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.490748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.490755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.853 [2024-07-15 21:45:21.491166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.853 [2024-07-15 21:45:21.491173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.853 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.491663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.491670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 
00:29:31.854 [2024-07-15 21:45:21.492069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.492075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.492475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.492482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.492867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.492873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.493267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.493273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.493696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.493702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.494021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.494027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.494422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.494428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.494817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.494824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.495209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.495216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.495603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.495610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 
00:29:31.854 [2024-07-15 21:45:21.496014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.496021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.496434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.496441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.496845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.496851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.497294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.497301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.497707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.497713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.498114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.498125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.498541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.498548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.499001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.499007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.499268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.499275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.499680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.499687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 
00:29:31.854 [2024-07-15 21:45:21.500186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.500195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.500446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.500454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.500850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.500856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.501240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.501246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.501671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.501679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.502106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.502112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.502590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.502597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.503001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.503008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.503411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.503418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.503815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.503821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 
00:29:31.854 [2024-07-15 21:45:21.504216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.854 [2024-07-15 21:45:21.504223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.854 qpair failed and we were unable to recover it. 00:29:31.854 [2024-07-15 21:45:21.504596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.504603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.505017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.505023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.505477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.505483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.505791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.505798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.506210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.506217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.506626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.506632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.507032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.507039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.507461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.507468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.507894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.507901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 
00:29:31.855 [2024-07-15 21:45:21.508329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.508336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.508719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.508725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.509107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.509114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.509507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.509514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.509941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.509947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.510457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.510483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.510883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.510892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.511100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.511112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.511578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.511586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.511929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.511936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 
00:29:31.855 [2024-07-15 21:45:21.512472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.512499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.512899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.512907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.513449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.513477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.513837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.513845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.514328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.514355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.514755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.514763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.515194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.515201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.515626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.515632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.516037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.516044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.516435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.516442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 
00:29:31.855 [2024-07-15 21:45:21.516904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.516911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.517302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.517309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.517689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.517696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.518063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.518070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.518445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.518451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.518649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.518659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.519022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.519029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.519430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.519437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.519866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.519872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.520256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.520263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 
00:29:31.855 [2024-07-15 21:45:21.520686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.520692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.521149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.521156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.521459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.521466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.521890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.521896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.855 [2024-07-15 21:45:21.522159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.855 [2024-07-15 21:45:21.522167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.855 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.522576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.522583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.522873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.522880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.523272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.523278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.523668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.523674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.524059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.524066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 
00:29:31.856 [2024-07-15 21:45:21.524463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.524469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.524898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.524905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.525308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.525315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.525743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.525750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.526174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.526181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.526584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.526591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.526998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.527005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.527432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.527441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.527865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.527872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.528366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.528394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 
00:29:31.856 [2024-07-15 21:45:21.528806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.528815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.529223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.529231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.529654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.529661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.530049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.530055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.530373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.530381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.530799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.530806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.531193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.531199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.531567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.531574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.532052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.532059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.532508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.532515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 
00:29:31.856 [2024-07-15 21:45:21.532713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.532723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.533131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.533138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.533453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.533461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.533865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.533871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.534255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.534262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.534580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.534586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.535016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.535022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.535430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.535437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.535821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.535828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.536249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.536256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 
00:29:31.856 [2024-07-15 21:45:21.536656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.536663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.537063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.537070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.537466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.537473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.537856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.537863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.538262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.538269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.538657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.538663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.538866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.538875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.539214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.539221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.539636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.539643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.540047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.540054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 
00:29:31.856 [2024-07-15 21:45:21.540357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.540364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.540765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.540772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.541166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.856 [2024-07-15 21:45:21.541173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.856 qpair failed and we were unable to recover it. 00:29:31.856 [2024-07-15 21:45:21.541584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.541591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.542012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.542018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.542478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.542485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.542790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.542797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.543207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.543222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.543485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.543492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.543912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.543918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 
00:29:31.857 [2024-07-15 21:45:21.544330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.544337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.544719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.544725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.545110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.545116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.545545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.545552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.545957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.545965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.546478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.546505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.546910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.546918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.547410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.547437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.547839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.547847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.548050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.548060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 
00:29:31.857 [2024-07-15 21:45:21.548473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.548480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.548930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.548938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.549446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.549473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.549686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.549695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.550100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.550107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.550514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.550522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.550928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.550935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.551332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.551359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.551668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.551677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.552105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.552112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 
00:29:31.857 [2024-07-15 21:45:21.552520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.552527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.552909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.552915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.553434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.553461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.553773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.553782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.554201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.554209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.554605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.554611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.555044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.555051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.555455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.555462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.555850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.555857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.556252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.556258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 
00:29:31.857 [2024-07-15 21:45:21.556668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.556674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.557116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.557127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.557542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.557549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.557755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.557764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.558169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.558176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.558590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.558596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.559004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.559010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.559395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.559405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.559824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.559831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.560258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.560264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 
00:29:31.857 [2024-07-15 21:45:21.560739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.560746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.561136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.561143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.561545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.561551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.857 qpair failed and we were unable to recover it. 00:29:31.857 [2024-07-15 21:45:21.561986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.857 [2024-07-15 21:45:21.561992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.562517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.562544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.562943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.562952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.563351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.563378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.563838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.563847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.564337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.564365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.564823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.564831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 
00:29:31.858 [2024-07-15 21:45:21.565136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.565144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.565537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.565544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.565927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.565934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.566429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.566456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.566858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.566866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.567403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.567430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.567832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.567840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.568133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.568142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.568536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.568543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.569001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.569008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 
00:29:31.858 [2024-07-15 21:45:21.569491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.569518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.569734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.569744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.570165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.570173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.570563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.570570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.570973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.570980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.571389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.571396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.571828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.571835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.572273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.572279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.572693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.572700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.573102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.573108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 
00:29:31.858 [2024-07-15 21:45:21.573519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.573526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.573935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.573941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.574448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.574476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.574893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.574901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.575326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.575354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.575784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.575792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.576288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.576315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.576772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.576783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.577217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.577224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.577611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.577618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 
00:29:31.858 [2024-07-15 21:45:21.578015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.578021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.578445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.578452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.578841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.578848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.579239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.579247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.579539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.579546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.579953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.579960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.580169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.580179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.858 [2024-07-15 21:45:21.580592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.858 [2024-07-15 21:45:21.580599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.858 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.580898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.580905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.581314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.581321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 
00:29:31.859 [2024-07-15 21:45:21.581705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.581711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.582102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.582109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.582506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.582513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.582896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.582902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.583329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.583335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.583528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.583536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.583958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.583965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.584361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.584368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.584568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.584576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.584990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.584997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 
00:29:31.859 [2024-07-15 21:45:21.585380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.585387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.585797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.585803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.586241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.586248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.586677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.586684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.587087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.587095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.587500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.587506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.587931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.587938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.588429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.588457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.588766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.588774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.589192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.589199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 
00:29:31.859 [2024-07-15 21:45:21.589602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.589609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.590074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.590081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.590488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.590495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.590808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.590815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.591255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.591262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.591648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.591654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.592047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.592053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.592299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.592309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.592670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.592677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.592941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.592949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 
00:29:31.859 [2024-07-15 21:45:21.593365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.593372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.593755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.593761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.594188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.594195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.594617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.594623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.595010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.595017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.595433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.595440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.595865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.595872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.596301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.596308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.596688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.596694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.597079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.597085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 
00:29:31.859 [2024-07-15 21:45:21.597466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.597473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.597865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.597871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.598276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.598282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.598662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.598668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.599093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.599100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.599530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.599537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.599940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.599947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.600435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.600462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.859 [2024-07-15 21:45:21.600863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.859 [2024-07-15 21:45:21.600871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.859 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.601313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.601340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 
00:29:31.860 [2024-07-15 21:45:21.601742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.601751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.602158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.602166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.602568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.602575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.602871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.602883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.603295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.603303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.603706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.603713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.604132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.604139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.604523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.604529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.604952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.604958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.605343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.605349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 
00:29:31.860 [2024-07-15 21:45:21.605648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.605661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.606043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.606049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.606335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.606342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.606685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.606692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.607076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.607083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.607380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.607387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.607812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.607819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.608201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.608210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.608649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.608656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.608916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.608924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 
00:29:31.860 [2024-07-15 21:45:21.609334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.609341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.609724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.609730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.610026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.610033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.610432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.610439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.610737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.610744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.611153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.611160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.611567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.611574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.612001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.612008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.612516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.612522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.612929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.612935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 
00:29:31.860 [2024-07-15 21:45:21.613238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.613245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.613649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.613655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.614037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.614043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.614338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.614345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.614739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.614746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.615131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.615137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.615551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.615558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.615864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.615872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.616258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.616265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.616694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.616700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 
00:29:31.860 [2024-07-15 21:45:21.617085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.617091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.617322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.617329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.617719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.617725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.618162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.618169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.618581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.618587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.618978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.618985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.619413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.619420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.619804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.619810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.860 [2024-07-15 21:45:21.620275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.860 [2024-07-15 21:45:21.620282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.860 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.620578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.620593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 
00:29:31.861 [2024-07-15 21:45:21.620998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.621004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.621297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.621311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.621724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.621730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.622115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.622125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.622510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.622517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.622581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.622591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.622953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.622960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.623350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.623359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.623750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.623756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.624141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.624147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 
00:29:31.861 [2024-07-15 21:45:21.624595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.624601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.625023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.625030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.625406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.625413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.625836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.625843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.626269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.626276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.626702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.626709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.627117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.627128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.627543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.627550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.627970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.627977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.628458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.628486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 
00:29:31.861 [2024-07-15 21:45:21.628902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.628911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.629433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.629460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.629891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.629899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.630414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.630442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.630840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.630848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.631361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.631388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.631792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.631800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.632186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.632193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.632588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.632594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.632985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.632992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 
00:29:31.861 [2024-07-15 21:45:21.633391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.633399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.633808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.633815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.634314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.634341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.634675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.634683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.635040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.635047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.635451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.635458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.635886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.635892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.636286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.636292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.636714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.636721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 00:29:31.861 [2024-07-15 21:45:21.637133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.861 [2024-07-15 21:45:21.637141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:31.861 qpair failed and we were unable to recover it. 
00:29:32.145 [2024-07-15 21:45:21.637560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.637568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.637996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.638004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.638459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.638466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.638892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.638898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.639499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.639527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.639947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.639955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.640532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.640559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.640905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.640916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.641417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.641445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.641912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.641920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 
00:29:32.145 [2024-07-15 21:45:21.642493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.642521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.642981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.642989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.643461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.643488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.643941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.643959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.644503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.644531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.645000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.645009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.645547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.645575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.645995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.646004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.646541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.646569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.646987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.646996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 
00:29:32.145 [2024-07-15 21:45:21.647556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.647584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.647790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.647800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.648325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.648353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.648769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.145 [2024-07-15 21:45:21.648777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.145 qpair failed and we were unable to recover it. 00:29:32.145 [2024-07-15 21:45:21.649087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.649094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.649510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.649517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.649952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.649959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.650473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.650501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.650807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.650816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.651330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.651358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 
00:29:32.146 [2024-07-15 21:45:21.651805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.651814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.652216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.652224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.652652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.652660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.653094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.653102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.653529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.653536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.653943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.653951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.654162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.654172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.654589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.654597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.655027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.655035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.655546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.655575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 
00:29:32.146 [2024-07-15 21:45:21.655888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.655897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.656435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.656464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.656893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.656902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.657403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.657432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.657733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.657742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.658172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.658181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.658607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.658613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.658998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.659009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.659478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.659486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.659887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.659894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 
00:29:32.146 [2024-07-15 21:45:21.660278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.660285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.660698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.660704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.661133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.661141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.661424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.661430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.661709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.661721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.662129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.662136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.662320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.662330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.662770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.662777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.663163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.663170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.663541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.663549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 
00:29:32.146 [2024-07-15 21:45:21.663850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.663857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.664268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.664275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.664671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.664678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.665128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.665136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.146 qpair failed and we were unable to recover it. 00:29:32.146 [2024-07-15 21:45:21.665560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.146 [2024-07-15 21:45:21.665566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.666030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.666037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.666437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.666444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.666867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.666875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.667283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.667290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.667715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.667721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 
00:29:32.147 [2024-07-15 21:45:21.668030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.668037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.668454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.668461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.668856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.668864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.669287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.669294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.669683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.669690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.670159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.670167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.670588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.670596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.670979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.670985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.671186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.671194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.671613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.671620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 
00:29:32.147 [2024-07-15 21:45:21.672005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.672012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.672415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.672421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.672875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.672881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.673265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.673271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.673700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.673707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.674135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.674143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.674329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.674337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.674636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.674645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.675050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.675056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.675289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.675296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 
00:29:32.147 [2024-07-15 21:45:21.675572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.675579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.675986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.675992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.676385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.676392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.676779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.676785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.677146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.677153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.677652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.677658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.678040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.678047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.678376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.678384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.678799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.678805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.679192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.679199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 
00:29:32.147 [2024-07-15 21:45:21.679613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.679620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.680021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.680027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.680491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.680497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.680919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.680926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.681333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.681340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.147 qpair failed and we were unable to recover it. 00:29:32.147 [2024-07-15 21:45:21.681727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.147 [2024-07-15 21:45:21.681733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.682118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.682138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.682326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.682334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.682803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.682811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.683248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.683255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 
00:29:32.148 [2024-07-15 21:45:21.683646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.683654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.684097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.684103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.684511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.684518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.684906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.684913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.685472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.685499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.685996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.686004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.686502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.686529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.686930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.686938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.687476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.687503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.687921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.687929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 
00:29:32.148 [2024-07-15 21:45:21.688434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.688462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.688873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.688881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.689443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.689471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.689875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.689883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.690408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.690435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.690745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.690754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.691156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.691163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.691474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.691484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.691973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.691981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.692373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.692380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 
00:29:32.148 [2024-07-15 21:45:21.692773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.692780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.693207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.693213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.693601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.693607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.694001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.694008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.694401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.694408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.694789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.694796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.695227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.695234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.695640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.695647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.696057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.696064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.696484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.696491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 
00:29:32.148 [2024-07-15 21:45:21.696872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.696879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.697336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.697343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.697729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.697736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.698126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.698134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.698563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.698570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.699001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.148 [2024-07-15 21:45:21.699008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.148 qpair failed and we were unable to recover it. 00:29:32.148 [2024-07-15 21:45:21.699310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.699317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.699642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.699648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.700127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.700134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.700544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.700550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 
00:29:32.149 [2024-07-15 21:45:21.700971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.700978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.701479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.701506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.701905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.701914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.702401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.702429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.702860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.702868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.703363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.703390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.703789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.703797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.704347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.704374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.704682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.704691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.705099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.705106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 
00:29:32.149 [2024-07-15 21:45:21.705572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.705579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.705983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.705990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.706504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.706531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.706931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.706939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.707429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.707456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.707664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.707673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.708093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.708100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.708491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.708502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.708890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.708897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.709396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.709423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 
00:29:32.149 [2024-07-15 21:45:21.709821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.709829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.710236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.710243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.710551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.710557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.710992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.710999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.711386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.711393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.711800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.711807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.712304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.712332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.712747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.712755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.713178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.713185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 00:29:32.149 [2024-07-15 21:45:21.713613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.149 [2024-07-15 21:45:21.713620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.149 qpair failed and we were unable to recover it. 
00:29:32.149 [2024-07-15 21:45:21.714088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.714095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.714498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.714505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.714942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.714949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.715321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.715348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.715803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.715811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.716203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.716210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.716395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.716403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.716804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.716811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.717241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.717247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.717622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.717630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 
00:29:32.150 [2024-07-15 21:45:21.718061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.718068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.718479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.718487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.718892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.718899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.719288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.719295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.719682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.719692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.720116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.720128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.720550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.720556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.720944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.720950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.721470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.721498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.721918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.721926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 
00:29:32.150 [2024-07-15 21:45:21.722430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.722457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.722886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.722894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.723417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.723444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.723853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.723862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.724399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.724427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.724909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.724918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.725401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.725428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.725828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.725837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.726359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.726386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.726815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.726824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 
00:29:32.150 [2024-07-15 21:45:21.727142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.727149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.727395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.727402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.727681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.727688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.728118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.728130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.728544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.728551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.728955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.728962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.729368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.729376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.729702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.729709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.730117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.730131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.730519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.730525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 
00:29:32.150 [2024-07-15 21:45:21.730928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.150 [2024-07-15 21:45:21.730934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.150 qpair failed and we were unable to recover it. 00:29:32.150 [2024-07-15 21:45:21.731473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.731501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.731917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.731925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.732426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.732454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.732859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.732867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.733410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.733437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.733922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.733930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.734430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.734458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.734858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.734866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.735429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.735456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 
00:29:32.151 [2024-07-15 21:45:21.735860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.735868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.736384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.736412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.736811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.736819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.737210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.737217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.737609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.737619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.738048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.738055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.738459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.738466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.738769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.738776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.739168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.739175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 00:29:32.151 [2024-07-15 21:45:21.739599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.151 [2024-07-15 21:45:21.739605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.151 qpair failed and we were unable to recover it. 
00:29:32.151 [2024-07-15 21:45:21.740019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.151 [2024-07-15 21:45:21.740025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:32.151 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 21:45:21.740 through 21:45:21.826: every connect() attempt for tqpair=0x7f299c000b90 to addr=10.0.0.2, port=4420 fails with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:29:32.156 [2024-07-15 21:45:21.826809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.156 [2024-07-15 21:45:21.826816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:32.156 qpair failed and we were unable to recover it.
00:29:32.156 [2024-07-15 21:45:21.827256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.156 [2024-07-15 21:45:21.827263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.156 qpair failed and we were unable to recover it. 00:29:32.156 [2024-07-15 21:45:21.827650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.156 [2024-07-15 21:45:21.827657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.156 qpair failed and we were unable to recover it. 00:29:32.156 [2024-07-15 21:45:21.827959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.156 [2024-07-15 21:45:21.827966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.156 qpair failed and we were unable to recover it. 00:29:32.156 [2024-07-15 21:45:21.828391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.156 [2024-07-15 21:45:21.828399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.156 qpair failed and we were unable to recover it. 00:29:32.156 [2024-07-15 21:45:21.828830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.156 [2024-07-15 21:45:21.828837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.156 qpair failed and we were unable to recover it. 00:29:32.156 [2024-07-15 21:45:21.829357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.156 [2024-07-15 21:45:21.829385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.156 qpair failed and we were unable to recover it. 00:29:32.156 [2024-07-15 21:45:21.829800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.156 [2024-07-15 21:45:21.829808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.156 qpair failed and we were unable to recover it. 00:29:32.156 [2024-07-15 21:45:21.830211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.156 [2024-07-15 21:45:21.830219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.156 qpair failed and we were unable to recover it. 00:29:32.156 [2024-07-15 21:45:21.830655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.156 [2024-07-15 21:45:21.830661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.156 qpair failed and we were unable to recover it. 00:29:32.156 [2024-07-15 21:45:21.831053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.156 [2024-07-15 21:45:21.831059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.156 qpair failed and we were unable to recover it. 
00:29:32.156 [2024-07-15 21:45:21.831440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.156 [2024-07-15 21:45:21.831447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.156 qpair failed and we were unable to recover it. 00:29:32.156 [2024-07-15 21:45:21.831904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.156 [2024-07-15 21:45:21.831910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.156 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.832438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.832465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.832868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.832877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.833377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.833404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.833827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.833835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.834251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.834259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.834647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.834653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.835039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.835045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.835514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.835521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 
00:29:32.157 [2024-07-15 21:45:21.835907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.835914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.836425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.836452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.836854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.836862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.837152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.837167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.837626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.837636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.838043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.838050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.838453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.838460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.838882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.838890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.839185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.839192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.839606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.839613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 
00:29:32.157 [2024-07-15 21:45:21.839810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.839820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.840200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.840207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.840662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.840668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.841055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.841062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.841474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.841482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.841908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.841915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.842344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.842351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.842774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.842780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.843166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.843174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.843554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.843560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 
00:29:32.157 [2024-07-15 21:45:21.843993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.843999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.844408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.844414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.844839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.844845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.845380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.845408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.845834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.845843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.846242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.846249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.846659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.846666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.847167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.847174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.847548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.847555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.847940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.847946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 
00:29:32.157 [2024-07-15 21:45:21.848254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.848261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.848475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.848486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.157 qpair failed and we were unable to recover it. 00:29:32.157 [2024-07-15 21:45:21.848889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.157 [2024-07-15 21:45:21.848896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.849332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.849339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.849756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.849762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.850195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.850202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.850588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.850594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.850977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.850983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.851301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.851308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.851709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.851715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 
00:29:32.158 [2024-07-15 21:45:21.852137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.852145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.852549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.852556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.852996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.853002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.853445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.853452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.853894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.853903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.854409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.854436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.854840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.854848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.855356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.855388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.855788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.855796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.856223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.856230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 
00:29:32.158 [2024-07-15 21:45:21.856674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.856682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.857069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.857076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.857469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.857475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.857860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.857867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.858291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.858298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.858674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.858680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.859093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.859100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.859505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.859512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.859898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.859905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.860421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.860448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 
00:29:32.158 [2024-07-15 21:45:21.860922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.860930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.861463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.861490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.861895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.861903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.862428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.862455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.862904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.862913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.863414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.863442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.863758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.863768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.864085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.158 [2024-07-15 21:45:21.864092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.158 qpair failed and we were unable to recover it. 00:29:32.158 [2024-07-15 21:45:21.864471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.864479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.864886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.864893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 
00:29:32.159 [2024-07-15 21:45:21.865410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.865438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.865866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.865875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.866403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.866431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.866842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.866852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.867367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.867394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.867705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.867714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.868147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.868155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.868478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.868485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.868882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.868890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.869324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.869331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 
00:29:32.159 [2024-07-15 21:45:21.869730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.869737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.870057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.870064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.870489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.870497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.870944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.870951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.871357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.871368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.871630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.871637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.871911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.871918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.872339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.872346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.872773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.872781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.873186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.873193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 
00:29:32.159 [2024-07-15 21:45:21.873488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.873496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.873786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.873793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.874216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.874224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.874639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.874647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.875041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.875049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.875431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.875438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.875866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.875873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.876282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.876289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.876711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.876718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.877145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.877153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 
00:29:32.159 [2024-07-15 21:45:21.877453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.877461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.877866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.877873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.878276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.878284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.878687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.878694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.878994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.879001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.879401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.879408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.879812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.879820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.880244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.880251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.159 [2024-07-15 21:45:21.880521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.159 [2024-07-15 21:45:21.880529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.159 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.880837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.880844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 
00:29:32.160 [2024-07-15 21:45:21.881325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.881332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.881731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.881738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.882164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.882172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.882573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.882580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.882974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.882980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.883365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.883372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.883638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.883645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.884041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.884048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.884439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.884445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.884870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.884877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 
00:29:32.160 [2024-07-15 21:45:21.885209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.885216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.885526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.885532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.885919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.885925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.886321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.886328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.886745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.886752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.887171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.887178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.887578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.887585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.888001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.888008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.888399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.888405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.888789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.888795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 
00:29:32.160 [2024-07-15 21:45:21.889182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.889189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.889470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.889478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.889863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.889870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.890177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.890185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.890591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.890597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.891017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.891023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.891471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.891478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.891860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.891866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.892236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.892243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.892553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.892560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 
00:29:32.160 [2024-07-15 21:45:21.892869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.892875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.893263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.893269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.893683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.893690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.893899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.893909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.894335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.894342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.894747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.894754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.895149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.895157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.895551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.895557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.895961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.895968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 00:29:32.160 [2024-07-15 21:45:21.896351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.160 [2024-07-15 21:45:21.896358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.160 qpair failed and we were unable to recover it. 
00:29:32.161 [2024-07-15 21:45:21.896777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.896783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.897176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.897184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.897646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.897652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.897851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.897859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.898322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.898329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.898712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.898719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.899105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.899112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.899507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.899515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.899921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.899928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.900416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.900443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 
00:29:32.161 [2024-07-15 21:45:21.900758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.900766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.900979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.900988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.901401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.901409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.901804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.901810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.902241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.902251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.902686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.902693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.903127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.903134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.903561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.903568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.903761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.903770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.904181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.904188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 
00:29:32.161 [2024-07-15 21:45:21.904645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.904653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.904971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.904979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.905401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.905408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.905728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.905734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.906035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.906042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.906441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.906447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.906841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.906848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.907231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.907237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.907657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.907663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.908091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.908097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 
00:29:32.161 [2024-07-15 21:45:21.908486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.908493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.908883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.908890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.909410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.909438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.909878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.909887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.910401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.910428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.910828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.910836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.911223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.911230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.911602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.911610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.912039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.912046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.161 [2024-07-15 21:45:21.912445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.912451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 
00:29:32.161 [2024-07-15 21:45:21.912835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.161 [2024-07-15 21:45:21.912842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.161 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.913225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.913232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.913632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.913638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.914040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.914047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.914451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.914458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.914881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.914888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.915097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.915107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.915474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.915482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.915794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.915801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.916187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.916194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 
00:29:32.162 [2024-07-15 21:45:21.916617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.916623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.916839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.916846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.917280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.917287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.917760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.917767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.918056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.918065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.918461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.918468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.918890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.918897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.919335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.919342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.919773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.919780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.920182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.920188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 
00:29:32.162 [2024-07-15 21:45:21.920576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.920582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.921012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.921019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.921314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.921321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.921732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.921738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.921921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.921928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.922358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.922365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.922568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.922575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.922996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.923003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.923387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.923394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.923768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.923775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 
00:29:32.162 [2024-07-15 21:45:21.924197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.924204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.924609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.924615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.925045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.925052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.925436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.925443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.925831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.925838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.926268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.162 [2024-07-15 21:45:21.926274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.162 qpair failed and we were unable to recover it. 00:29:32.162 [2024-07-15 21:45:21.926665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.926672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.927098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.927105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.927503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.927510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.927913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.927921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 
00:29:32.163 [2024-07-15 21:45:21.928414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.928442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.928870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.928879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.929340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.929368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.929784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.929792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.930203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.930210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.930657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.930664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.930963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.930971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.931364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.931371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.931758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.931764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.932150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.932158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 
00:29:32.163 [2024-07-15 21:45:21.932578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.932585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.932989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.932996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.933203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.933212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.933664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.933671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.934065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.934071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.934468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.934476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.934880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.934886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.935278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.935286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.935658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.935665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.936048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.936055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 
00:29:32.163 [2024-07-15 21:45:21.936446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.936453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.936837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.936845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.937228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.937235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.937649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.937656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.938064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.938071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.938286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.938295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.938711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.938718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.939155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.939162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.939567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.939574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 00:29:32.163 [2024-07-15 21:45:21.939998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.163 [2024-07-15 21:45:21.940005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.163 qpair failed and we were unable to recover it. 
00:29:32.434 [2024-07-15 21:45:21.940465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.940474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.940900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.940907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.941451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.941478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.941879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.941888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.942372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.942400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.942800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.942808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.943198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.943206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.943630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.943637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.944056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.944063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.944463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.944470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 
00:29:32.434 [2024-07-15 21:45:21.944900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.944907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.945324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.945357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.945787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.945795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.946264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.946272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.946681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.946688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.947126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.947133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.947545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.947551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.947865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.947871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.948288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.948316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 00:29:32.434 [2024-07-15 21:45:21.948779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.434 [2024-07-15 21:45:21.948787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.434 qpair failed and we were unable to recover it. 
00:29:32.434 [2024-07-15 21:45:21.949047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.434 [2024-07-15 21:45:21.949055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:32.434 qpair failed and we were unable to recover it.
[ ... the same three-line error sequence repeats for every subsequent connection attempt in this window (SPDK timestamps 21:45:21.949 through 21:45:22.034, Jenkins timestamps 00:29:32.434 through 00:29:32.440); only the per-attempt timestamps change, and each attempt targets tqpair=0x7f299c000b90 at 10.0.0.2 port 4420 and fails with errno = 111 ... ]
00:29:32.440 [2024-07-15 21:45:22.034942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.034948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.035331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.035337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.035740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.035746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.036161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.036168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.036582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.036589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.036976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.036982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.037368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.037375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.037783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.037789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.038187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.038194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.038630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.038637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 
00:29:32.440 [2024-07-15 21:45:22.039062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.039069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.039482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.039489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.039913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.039920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.040341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.040348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.040735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.040742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.041166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.041172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.041471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.041477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.041851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.041858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.042243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.042250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.042679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.042685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 
00:29:32.440 [2024-07-15 21:45:22.043078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.043085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.043438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.043447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.043852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.043858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.044161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.044175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.440 [2024-07-15 21:45:22.044582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.440 [2024-07-15 21:45:22.044589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.440 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.044978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.044984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.045373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.045380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.045808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.045814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.046202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.046209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.046430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.046439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 
00:29:32.441 [2024-07-15 21:45:22.046865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.046872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.047277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.047284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.047693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.047699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.048089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.048096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.048483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.048490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.048912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.048918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.049409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.049437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.049872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.049880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.050435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.050462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.050667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.050676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 
00:29:32.441 [2024-07-15 21:45:22.051105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.051112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.051474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.051482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.051911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.051917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.052181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.052188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.052571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.052577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.052982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.052988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.053279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.053286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.053769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.053775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.054174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.054181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.054604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.054611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 
00:29:32.441 [2024-07-15 21:45:22.055036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.055043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.055442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.055448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.055871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.055878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.056210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.056217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.056636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.056642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.057023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.057030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.057415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.057423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.057825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.057831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.058212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.058219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.058533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.058539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 
00:29:32.441 [2024-07-15 21:45:22.058967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.441 [2024-07-15 21:45:22.058973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.441 qpair failed and we were unable to recover it. 00:29:32.441 [2024-07-15 21:45:22.059361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.059370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.059797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.059803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.060194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.060201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.060605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.060612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.060915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.060922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.061342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.061349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.061732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.061739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.062163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.062170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.062478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.062485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 
00:29:32.442 [2024-07-15 21:45:22.062699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.062708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.063002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.063009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.063435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.063442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.063871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.063878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.064311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.064318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.064619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.064626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.065038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.065045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.065448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.065454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.065841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.065847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.066241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.066248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 
00:29:32.442 [2024-07-15 21:45:22.066639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.066645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.067029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.067036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.067431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.067438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.067749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.067757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.068170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.068177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.068474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.068480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.068892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.068899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.069307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.069314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.069730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.069737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.070026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.070034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 
00:29:32.442 [2024-07-15 21:45:22.070341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.070348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.070785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.070792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.071256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.071264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.071669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.071677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.072106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.072113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.072510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.072517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.072901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.072907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.073332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.073339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.073722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.073728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.074116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.074124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 
00:29:32.442 [2024-07-15 21:45:22.074217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.074225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.442 [2024-07-15 21:45:22.074589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.442 [2024-07-15 21:45:22.074597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.442 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.075016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.075023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.075436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.075443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.075836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.075843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.076270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.076277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.076668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.076674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.077058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.077064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.077460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.077467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.077869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.077876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 
00:29:32.443 [2024-07-15 21:45:22.078289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.078295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.078707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.078713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.079096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.079103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.079508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.079514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.079901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.079908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.080428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.080455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.080858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.080866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.081365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.081393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.081796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.081804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.082006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.082015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 
00:29:32.443 [2024-07-15 21:45:22.082441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.082448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.082832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.082838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.083244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.083252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.083683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.083689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.084116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.084130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.084513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.084519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.084941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.084948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.085466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.085494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.085894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.085903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 00:29:32.443 [2024-07-15 21:45:22.086425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.443 [2024-07-15 21:45:22.086452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.443 qpair failed and we were unable to recover it. 
00:29:32.443 [2024-07-15 21:45:22.086856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.443 [2024-07-15 21:45:22.086864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:32.443 qpair failed and we were unable to recover it.
00:29:32.443 [... the same connect() failure (errno = 111) against tqpair=0x7f299c000b90, addr=10.0.0.2, port=4420 repeats for roughly 200 further attempts between 21:45:22.086 and 21:45:22.173, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:29:32.449 [2024-07-15 21:45:22.173553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.449 [2024-07-15 21:45:22.173560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:32.449 qpair failed and we were unable to recover it.
00:29:32.449 [2024-07-15 21:45:22.173951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.173958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.174452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.174479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.174879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.174887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.175423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.175451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.175850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.175859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.176388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.176415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.176816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.176824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.177381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.177408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.177836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.177845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.178277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.178284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 
00:29:32.449 [2024-07-15 21:45:22.178688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.178694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.179088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.179095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.179539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.179546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.179948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.179955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.180367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.180394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.180825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.180833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.181321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.181348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.181752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.181760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.182148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.182155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.182471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.182478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 
00:29:32.449 [2024-07-15 21:45:22.182934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.182942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.183357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.183363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.183665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.183672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.184109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.184117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.184546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.184553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.184987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.184993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.185542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.185569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.185969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.185977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.186379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.186410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.186861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.186869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 
00:29:32.449 [2024-07-15 21:45:22.187397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.187424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.187739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.187747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.188250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.188257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.188659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.188665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.189082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.449 [2024-07-15 21:45:22.189089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.449 qpair failed and we were unable to recover it. 00:29:32.449 [2024-07-15 21:45:22.189503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.189510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.189949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.189956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.190450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.190478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.190906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.190916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.191450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.191478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 
00:29:32.450 [2024-07-15 21:45:22.191912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.191921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.192369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.192397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.192703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.192712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.193098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.193106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.193369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.193377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.193749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.193757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.194162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.194171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.194587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.194595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.194883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.194891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.195285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.195292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 
00:29:32.450 [2024-07-15 21:45:22.195703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.195710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.196137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.196144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.196611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.196618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.197028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.197035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.197430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.197437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.197859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.197867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.198177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.198185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.198599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.198606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.199001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.199009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.199215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.199226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 
00:29:32.450 [2024-07-15 21:45:22.199595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.199603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.200019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.200026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.200288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.200297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.200721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.200729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.201140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.201147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.201535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.201541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.201926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.201933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.202355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.202361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.202768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.202778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.203217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.203224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 
00:29:32.450 [2024-07-15 21:45:22.203622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.203629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.204055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.204062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.204466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.204473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.204671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.204679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.205111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.205118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.205514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.450 [2024-07-15 21:45:22.205520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.450 qpair failed and we were unable to recover it. 00:29:32.450 [2024-07-15 21:45:22.205909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.205916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.206364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.206371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.206751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.206758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.207146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.207153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 
00:29:32.451 [2024-07-15 21:45:22.207558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.207565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.207971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.207978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.208425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.208431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.208864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.208870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.209362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.209389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.209794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.209802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.210211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.210218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.210427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.210435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.210726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.210733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.211141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.211148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 
00:29:32.451 [2024-07-15 21:45:22.211523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.211530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.211943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.211950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.212354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.212360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.212742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.212749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.213133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.213141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.213517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.213524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.213955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.213961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.214273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.214280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.214697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.214704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.215091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.215099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 
00:29:32.451 [2024-07-15 21:45:22.215511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.215520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.215791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.215799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.216131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.216138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.216573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.216581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.216968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.216975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.217302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.217310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.217716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.217723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.218170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.218177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.218582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.218592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.218977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.218984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 
00:29:32.451 [2024-07-15 21:45:22.219372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.219380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.451 [2024-07-15 21:45:22.219804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.451 [2024-07-15 21:45:22.219812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.451 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.220240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.220248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.220645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.220652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.221083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.221090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.221519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.221526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.221891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.221898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.222102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.222110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.222534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.222542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.222965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.222971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 
00:29:32.452 [2024-07-15 21:45:22.223458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.223485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.223889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.223897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.224398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.224426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.224854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.224862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.225391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.225419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.225820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.225828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.226376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.226403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.226802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.226811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.227075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.227084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.227485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.227492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 
00:29:32.452 [2024-07-15 21:45:22.227687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.227695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.228108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.228115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.228540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.228547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.452 [2024-07-15 21:45:22.228975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.452 [2024-07-15 21:45:22.228982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.452 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.229394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.229422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.229831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.229839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.230355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.230383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.230807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.230815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.231228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.231236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.231626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.231634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 
00:29:32.723 [2024-07-15 21:45:22.232023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.232030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.232447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.232455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.232886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.232892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.233285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.233292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.233640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.233647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.234061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.234067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.234464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.234471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.234856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.234863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.235248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.235259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.235532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.235540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 
00:29:32.723 [2024-07-15 21:45:22.235942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.235949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.236381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.236389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.236810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.236817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.237157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.237164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.237532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.237538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.237841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.237848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.238234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.238241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.238654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.238660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.239080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.239087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.239499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.239506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 
00:29:32.723 [2024-07-15 21:45:22.240092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.240103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.240489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.240497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.240921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.240927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.241312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.241319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.723 qpair failed and we were unable to recover it. 00:29:32.723 [2024-07-15 21:45:22.241700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.723 [2024-07-15 21:45:22.241708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.242114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.242121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.242513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.242520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.242905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.242912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.243431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.243459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.243784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.243793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 
00:29:32.724 [2024-07-15 21:45:22.244215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.244222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.244617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.244623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.245013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.245019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.245418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.245425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.245847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.245853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.246242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.246250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.246679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.246686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.247082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.247089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.247495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.247503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.247794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.247801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 
00:29:32.724 [2024-07-15 21:45:22.248040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.248047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.248423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.248430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.248822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.248829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.249303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.249310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.249699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.249706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.250090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.250097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.250516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.250523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.250985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.250993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.251479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.251510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.251913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.251922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 
00:29:32.724 [2024-07-15 21:45:22.252431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.252459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.252860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.252869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.253393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.253421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.253733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.253742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.254154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.254162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.254618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.254626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.254916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.254924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.255330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.255338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.255743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.255750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.724 [2024-07-15 21:45:22.256174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.256181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 
00:29:32.724 [2024-07-15 21:45:22.256562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.724 [2024-07-15 21:45:22.256570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.724 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.256977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.256984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.257298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.257306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.257714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.257722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.258157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.258165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.258573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.258581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.258986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.258993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.259397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.259405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.259817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.259825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.260225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.260233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 
00:29:32.725 [2024-07-15 21:45:22.260659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.260667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.261092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.261100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.261500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.261509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.261909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.261916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.262409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.262437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.262864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.262873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.263400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.263428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.263845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.263854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.264352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.264381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.264797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.264809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 
00:29:32.725 [2024-07-15 21:45:22.265200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.265207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.265621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.265629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.266033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.266040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.266443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.266450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.266875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.266882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.267194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.267201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.267606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.267613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.267922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.267929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.268324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.268335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.268739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.268747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 
00:29:32.725 [2024-07-15 21:45:22.268956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.268966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.269371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.269379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.269806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.269814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.270218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.270225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.270599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.270607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.271032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.271039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.271476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.271484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.271781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.271789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.725 [2024-07-15 21:45:22.272182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.725 [2024-07-15 21:45:22.272190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.725 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.272579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.272587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 
00:29:32.726 [2024-07-15 21:45:22.272983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.272991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.273400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.273408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.273818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.273826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.274261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.274269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.274694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.274701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.274903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.274911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.275337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.275344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.275769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.275776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.276162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.276169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.276630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.276637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 
00:29:32.726 [2024-07-15 21:45:22.277043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.277051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.277449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.277457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.277879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.277887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.278335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.278342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.278719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.278725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.279152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.279160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.279440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.279447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.279857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.279865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.280192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.280199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.280631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.280637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 
00:29:32.726 [2024-07-15 21:45:22.281073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.281079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.281472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.281479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.281872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.281879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.282147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.282155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.282565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.282572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.282773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.282781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.283161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.283168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.283578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.283585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.284012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.284021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.284415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.284423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 
00:29:32.726 [2024-07-15 21:45:22.284803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.284810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.285110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.285116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.285534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.285541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.285925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.285932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.286360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.286367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.726 qpair failed and we were unable to recover it. 00:29:32.726 [2024-07-15 21:45:22.286763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.726 [2024-07-15 21:45:22.286771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.287163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.287171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.287569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.287576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.287963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.287970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.288397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.288404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 
00:29:32.727 [2024-07-15 21:45:22.288827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.288834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.289339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.289366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.289767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.289775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.290173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.290181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.290592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.290599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.290981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.290988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.291376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.291383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.291817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.291825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.292315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.292342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.292761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.292769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 
00:29:32.727 [2024-07-15 21:45:22.293160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.293167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.293537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.293545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.293943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.293950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.294396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.294403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.294791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.294798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.295227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.295235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.295622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.295629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.296035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.296042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.296449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.296456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 00:29:32.727 [2024-07-15 21:45:22.296884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.727 [2024-07-15 21:45:22.296892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.727 qpair failed and we were unable to recover it. 
00:29:32.727 [2024-07-15 21:45:22.297278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:32.727 [2024-07-15 21:45:22.297285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 
00:29:32.727 qpair failed and we were unable to recover it. 
00:29:32.727 [2024-07-15 21:45:22.297696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:32.727 [2024-07-15 21:45:22.297702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 
00:29:32.727 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt from 21:45:22.297 through 21:45:22.383, elapsed log time 00:29:32.727 - 00:29:32.733 ...]
00:29:32.733 [2024-07-15 21:45:22.383333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:32.733 [2024-07-15 21:45:22.383340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 
00:29:32.733 qpair failed and we were unable to recover it. 
00:29:32.733 [2024-07-15 21:45:22.383719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.383725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.384016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.384024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.384405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.384412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.384564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.384573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.384949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.384955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.385351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.385358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.385783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.385791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.386219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.386226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.386537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.386543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.386967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.386973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 
00:29:32.733 [2024-07-15 21:45:22.387375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.387382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.387792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.387798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.388192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.388200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.388619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.388626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.389053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.389059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.389501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.389508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.389929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.389935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.390428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.390456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.390857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.390865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.391353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.391361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 
00:29:32.733 [2024-07-15 21:45:22.391750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.391757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.392336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.733 [2024-07-15 21:45:22.392364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.733 qpair failed and we were unable to recover it. 00:29:32.733 [2024-07-15 21:45:22.392765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.392774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.393189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.393196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.393634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.393641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.394030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.394040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.394405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.394412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.394832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.394839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.395247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.395254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.395672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.395679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 
00:29:32.734 [2024-07-15 21:45:22.396069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.396076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.396471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.396479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.396885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.396891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.397291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.397298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.397675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.397682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.398153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.398161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.398371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.398381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.398781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.398788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.399213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.399220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.399602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.399610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 
00:29:32.734 [2024-07-15 21:45:22.400014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.400021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.400433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.400441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.400864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.400872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.401281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.401287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.401682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.401689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.402078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.402085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.402481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.402488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.402868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.402875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.403265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.403272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.403471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.403480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 
00:29:32.734 [2024-07-15 21:45:22.403903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.403910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.404334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.404343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.404750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.404758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.405181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.405188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.405579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.405586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.406010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.406017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.406397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.406406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.406829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.406836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.407226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.407233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.407647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.407653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 
00:29:32.734 [2024-07-15 21:45:22.408023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.408029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.408437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.734 [2024-07-15 21:45:22.408444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.734 qpair failed and we were unable to recover it. 00:29:32.734 [2024-07-15 21:45:22.408827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.408834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.409226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.409232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.409654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.409661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.410054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.410063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.410456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.410464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.410850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.410857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.411239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.411246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.411674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.411681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 
00:29:32.735 [2024-07-15 21:45:22.411984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.411991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.412391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.412398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.412789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.412797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.413221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.413228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.413617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.413625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.414030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.414036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.414445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.414452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.414838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.414845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.415168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.415176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.415585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.415591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 
00:29:32.735 [2024-07-15 21:45:22.415796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.415805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.416218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.416225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.416616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.416624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.417026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.417033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.417427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.417434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.417748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.417755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.418142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.418149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.418334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.418342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.418627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.418635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.419061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.419068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 
00:29:32.735 [2024-07-15 21:45:22.419479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.419486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.419876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.419882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.420314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.420321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.420744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.420751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.421212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.421219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.421615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.421621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.421904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.421911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.422337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.422345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.422769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.422775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.423165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.423172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 
00:29:32.735 [2024-07-15 21:45:22.423623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.423629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.424015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.424021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.424441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.735 [2024-07-15 21:45:22.424448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.735 qpair failed and we were unable to recover it. 00:29:32.735 [2024-07-15 21:45:22.424762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.424769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.425197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.425205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.425629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.425639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.425831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.425839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.426245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.426252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.426640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.426646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.427064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.427070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 
00:29:32.736 [2024-07-15 21:45:22.427477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.427485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.427887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.427893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.428279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.428286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.428704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.428711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.429135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.429142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.429557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.429563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.429968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.429975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.430403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.430409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.430871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.430877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.431261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.431268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 
00:29:32.736 [2024-07-15 21:45:22.431524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.431531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.431728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.431736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.432128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.432136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.432543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.432549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.432989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.432996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.433512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.433540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.433951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.433959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.434455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.434482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.434886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.434894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 00:29:32.736 [2024-07-15 21:45:22.435472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.736 [2024-07-15 21:45:22.435500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:32.736 qpair failed and we were unable to recover it. 
00:29:32.736 [2024-07-15 21:45:22.435893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.736 [2024-07-15 21:45:22.435901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:32.736 qpair failed and we were unable to recover it.
00:29:33.011 [the same three-message sequence repeats unchanged, apart from timestamps, for every reconnect attempt from 21:45:22.436403 through 21:45:22.524086: each connect() to 10.0.0.2:4420 returns errno = 111, the sock connection error is reported for tqpair=0x7f299c000b90, and each qpair fails without recovering]
00:29:33.011 [2024-07-15 21:45:22.524489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.011 [2024-07-15 21:45:22.524496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.011 qpair failed and we were unable to recover it. 00:29:33.011 [2024-07-15 21:45:22.524883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.011 [2024-07-15 21:45:22.524889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.011 qpair failed and we were unable to recover it. 00:29:33.011 [2024-07-15 21:45:22.525396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.011 [2024-07-15 21:45:22.525424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.525822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.525830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.526224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.526232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.526624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.526634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.527042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.527049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.527454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.527462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.527679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.527689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.528071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.528078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 
00:29:33.012 [2024-07-15 21:45:22.528485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.528492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.528897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.528904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.529201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.529207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.529613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.529619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.530039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.530045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.530429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.530436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.530863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.530869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.531166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.531173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.531573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.531579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.531878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.531886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 
00:29:33.012 [2024-07-15 21:45:22.532349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.532355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.532761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.532768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.533191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.533198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.533588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.533594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.533979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.533985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.534298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.534305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.534719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.534725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.535207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.535213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.535514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.535521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.535927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.535933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 
00:29:33.012 [2024-07-15 21:45:22.536356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.536363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.536787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.536794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.537217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.537224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.537649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.537655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.538097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.538103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.538500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.538508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.538897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.538904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.539077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.539086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.539385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.539393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.539806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.539813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 
00:29:33.012 [2024-07-15 21:45:22.540241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.540248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.540638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.540644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.540942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.012 [2024-07-15 21:45:22.540949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.012 qpair failed and we were unable to recover it. 00:29:33.012 [2024-07-15 21:45:22.541418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.541425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.541800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.541807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.542192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.542202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.542651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.542658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.542918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.542925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.543341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.543348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.543765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.543772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 
00:29:33.013 [2024-07-15 21:45:22.544156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.544163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.544648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.544655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.545045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.545051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.545314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.545321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.545634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.545641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.546032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.546039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.546440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.546447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.546852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.546859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.547242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.547249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.547532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.547538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 
00:29:33.013 [2024-07-15 21:45:22.547936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.547944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.548347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.548354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.548781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.548788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.549155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.549162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.549600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.549606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.549987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.549993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.550394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.550401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.550777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.550784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.551206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.551214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.551640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.551647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 
00:29:33.013 [2024-07-15 21:45:22.552103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.552109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.552402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.552410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.552801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.552808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.553190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.553197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.553586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.553592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.553975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.553981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.554379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.554386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.554694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.554702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.555104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.555112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.555516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.555523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 
00:29:33.013 [2024-07-15 21:45:22.555843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.555849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.556375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.556402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.556613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.556623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.557031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.013 [2024-07-15 21:45:22.557039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.013 qpair failed and we were unable to recover it. 00:29:33.013 [2024-07-15 21:45:22.557438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.557445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.557872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.557879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.558188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.558196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.558596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.558602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.558989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.558995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.559389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.559396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 
00:29:33.014 [2024-07-15 21:45:22.559794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.559800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.559974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.559982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.560410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.560417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.560841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.560848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.561150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.561158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.561574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.561581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.561964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.561970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.562360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.562367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.562632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.562639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.563052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.563059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 
00:29:33.014 [2024-07-15 21:45:22.563452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.563459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.563892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.563899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.564302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.564309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.564698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.564705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.565131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.565138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.565622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.565628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.565980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.565986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.566287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.566294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.566590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.566597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.567022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.567029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 
00:29:33.014 [2024-07-15 21:45:22.567434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.567441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.567866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.567872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.568264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.568273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.568697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.568704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.569108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.569116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.569543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.569551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.569978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.569985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.570487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.570514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.570919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.570928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.571487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.571515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 
00:29:33.014 [2024-07-15 21:45:22.571894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.571902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.572438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.572466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.572869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.572878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.573405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.573433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.014 [2024-07-15 21:45:22.573866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.014 [2024-07-15 21:45:22.573874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.014 qpair failed and we were unable to recover it. 00:29:33.015 [2024-07-15 21:45:22.574399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.015 [2024-07-15 21:45:22.574426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.015 qpair failed and we were unable to recover it. 00:29:33.015 [2024-07-15 21:45:22.574821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.015 [2024-07-15 21:45:22.574829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.015 qpair failed and we were unable to recover it. 00:29:33.015 [2024-07-15 21:45:22.575217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.015 [2024-07-15 21:45:22.575226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.015 qpair failed and we were unable to recover it. 00:29:33.015 [2024-07-15 21:45:22.575623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.015 [2024-07-15 21:45:22.575629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.015 qpair failed and we were unable to recover it. 00:29:33.015 [2024-07-15 21:45:22.576027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.015 [2024-07-15 21:45:22.576033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.015 qpair failed and we were unable to recover it. 
00:29:33.015 [2024-07-15 21:45:22.576437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.015 [2024-07-15 21:45:22.576444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.015 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 21:45:22.576 through 21:45:22.662 ...]
00:29:33.020 [2024-07-15 21:45:22.662521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.020 [2024-07-15 21:45:22.662527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.020 qpair failed and we were unable to recover it.
00:29:33.020 [2024-07-15 21:45:22.662960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.020 [2024-07-15 21:45:22.662967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.020 qpair failed and we were unable to recover it. 00:29:33.020 [2024-07-15 21:45:22.663370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.020 [2024-07-15 21:45:22.663377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.020 qpair failed and we were unable to recover it. 00:29:33.020 [2024-07-15 21:45:22.663818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.020 [2024-07-15 21:45:22.663826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.020 qpair failed and we were unable to recover it. 00:29:33.020 [2024-07-15 21:45:22.664274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.020 [2024-07-15 21:45:22.664281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.020 qpair failed and we were unable to recover it. 00:29:33.020 [2024-07-15 21:45:22.664705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.020 [2024-07-15 21:45:22.664712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.020 qpair failed and we were unable to recover it. 00:29:33.020 [2024-07-15 21:45:22.665108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.020 [2024-07-15 21:45:22.665114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.020 qpair failed and we were unable to recover it. 00:29:33.020 [2024-07-15 21:45:22.665563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.665570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.665995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.666002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.666533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.666561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.666886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.666895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 
00:29:33.021 [2024-07-15 21:45:22.667411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.667439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.667921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.667929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.668428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.668459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.668855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.668863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.669357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.669385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.669786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.669794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.670064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.670071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.670486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.670494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.670923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.670930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.671433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.671461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 
00:29:33.021 [2024-07-15 21:45:22.671861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.671869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.672363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.672391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.672795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.672804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.673190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.673197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.673575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.673582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.673986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.673992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.674446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.674453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.674864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.674871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.675409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.675436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.675846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.675854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 
00:29:33.021 [2024-07-15 21:45:22.676350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.676377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.676762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.676770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.677078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.677086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.677582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.677589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.677977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.677984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.678480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.678508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.678911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.678920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.679356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.679383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.679843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.679852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 00:29:33.021 [2024-07-15 21:45:22.680368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.021 [2024-07-15 21:45:22.680395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.021 qpair failed and we were unable to recover it. 
00:29:33.022 [2024-07-15 21:45:22.680792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.680801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.681286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.681314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.681780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.681790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.682195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.682203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.682638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.682645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.683066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.683074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.683479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.683489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.683903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.683910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.684355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.684382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.684837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.684846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 
00:29:33.022 [2024-07-15 21:45:22.685355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.685363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.685751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.685758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.686155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.686166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.686607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.686615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.687072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.687078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.687396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.687403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.687822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.687829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.688223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.688230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.688566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.688574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.689048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.689054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 
00:29:33.022 [2024-07-15 21:45:22.689546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.689554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.690015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.690022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.690334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.690342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.690751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.690757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.691157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.691166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.691567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.691574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.692042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.692050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.692514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.692521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.692925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.692932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.693340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.693348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 
00:29:33.022 [2024-07-15 21:45:22.693757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.693764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.694209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.694216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.694629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.694635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.695040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.695046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.695480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.695486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.695896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.695903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.696323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.022 [2024-07-15 21:45:22.696330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.022 qpair failed and we were unable to recover it. 00:29:33.022 [2024-07-15 21:45:22.696739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.696745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.697112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.697118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.697542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.697549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 
00:29:33.023 [2024-07-15 21:45:22.697981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.697988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.698426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.698453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.698911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.698920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.699447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.699474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.699972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.699981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.700499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.700527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.700942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.700950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.701464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.701491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.701819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.701828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.702343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.702370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 
00:29:33.023 [2024-07-15 21:45:22.702781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.702789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.703375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.703403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.703843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.703854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.704058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.704067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.704378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.704385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.704815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.704822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.705241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.705248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.705685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.705692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.706086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.706093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.706497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.706504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 
00:29:33.023 [2024-07-15 21:45:22.706886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.706894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.707288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.707296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.707582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.707589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.708003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.708009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.708461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.708468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.708856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.708863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.709277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.709284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.709758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.709765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.710024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.710031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.710444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.710451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 
00:29:33.023 [2024-07-15 21:45:22.710869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.710876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.711263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.711270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.023 [2024-07-15 21:45:22.711704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.023 [2024-07-15 21:45:22.711711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.023 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.712114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.712125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.712563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.712570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.712955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.712961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.713390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.713417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.713733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.713742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.714120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.714134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.714551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.714559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 
00:29:33.024 [2024-07-15 21:45:22.714945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.714951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.715148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.715159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.715600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.715607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.715995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.716002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.716543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.716571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.717032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.717040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.717336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.717344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.717803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.717809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.718328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.718356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 00:29:33.024 [2024-07-15 21:45:22.718799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.024 [2024-07-15 21:45:22.718807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.024 qpair failed and we were unable to recover it. 
00:29:33.024 [2024-07-15 21:45:22.719240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.024 [2024-07-15 21:45:22.719248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.024 qpair failed and we were unable to recover it.
00:29:33.024 (the three-line connect() failed / sock connection error / qpair failed sequence above repeats continuously from 21:45:22.719 through 21:45:22.805, always against tqpair=0x7f299c000b90, addr=10.0.0.2, port=4420)
00:29:33.030 [2024-07-15 21:45:22.805415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.030 [2024-07-15 21:45:22.805442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.030 qpair failed and we were unable to recover it.
00:29:33.030 [2024-07-15 21:45:22.805845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.030 [2024-07-15 21:45:22.805855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.030 qpair failed and we were unable to recover it. 00:29:33.030 [2024-07-15 21:45:22.806385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.030 [2024-07-15 21:45:22.806412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.030 qpair failed and we were unable to recover it. 00:29:33.030 [2024-07-15 21:45:22.806813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.030 [2024-07-15 21:45:22.806821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.030 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.807224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.807233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.807653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.807661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.808089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.808096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.808410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.808417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.808824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.808831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.809223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.809230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.809431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.809440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 
00:29:33.302 [2024-07-15 21:45:22.809858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.809865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.810329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.810336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.810710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.810717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.811108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.811115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.811587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.811593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.812028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.812035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.812441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.812448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.812873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.812880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.813386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.813413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.813811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.813823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 
00:29:33.302 [2024-07-15 21:45:22.814217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.814225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.814619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.814625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.815012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.815019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.815446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.815453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.815772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.815780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.816216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.816223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.816529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.816536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.816722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.816732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.817149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.817156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.817511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.817517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 
00:29:33.302 [2024-07-15 21:45:22.817815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.817822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.818225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.818232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.818496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.818503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.818934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.818941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.819310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.819318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.819720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.819727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.820144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.302 [2024-07-15 21:45:22.820151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.302 qpair failed and we were unable to recover it. 00:29:33.302 [2024-07-15 21:45:22.820522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.820529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.820909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.820916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.821299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.821306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 
00:29:33.303 [2024-07-15 21:45:22.821616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.821623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.822021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.822028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.822445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.822452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.822880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.822887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.823276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.823283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.823574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.823580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.823982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.823988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.824387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.824395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.824798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.824805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.825230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.825238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 
00:29:33.303 [2024-07-15 21:45:22.825642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.825650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.826057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.826064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.826454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.826460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.826887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.826894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.827234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.827241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.827644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.827650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.828048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.828055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.828446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.828453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.828768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.828776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.829185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.829194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 
00:29:33.303 [2024-07-15 21:45:22.829623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.829630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.830087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.830093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.830473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.830480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.830863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.830869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.831263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.831270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.831576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.831583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.831992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.831998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.832398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.832404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.832828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.832835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.833384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.833411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 
00:29:33.303 [2024-07-15 21:45:22.833810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.833819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.834206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.834214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.834630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.834637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.835014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.835020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.835442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.835449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.835844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.835850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.836239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.836246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.303 [2024-07-15 21:45:22.836650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.303 [2024-07-15 21:45:22.836656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.303 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.836857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.836866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.837226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.837233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 
00:29:33.304 [2024-07-15 21:45:22.837640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.837647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.838070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.838078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.838479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.838485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.838856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.838862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.839267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.839275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.839669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.839676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.840068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.840075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.840481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.840488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.840868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.840875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.841302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.841309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 
00:29:33.304 [2024-07-15 21:45:22.841690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.841696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.842027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.842034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.842234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.842243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.842624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.842632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.843036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.843042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.843409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.843416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.843823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.843831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.844255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.844262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.844575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.844582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.844996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.845004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 
00:29:33.304 [2024-07-15 21:45:22.845401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.845409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.845794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.845801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.846194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.846201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.846603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.846610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.847016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.847022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.847482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.847490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.847892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.847899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.848304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.848312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.848695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.848701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.849131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.849137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 
00:29:33.304 [2024-07-15 21:45:22.849564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.849570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.849956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.849963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.850346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.850353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.850787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.850794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.851201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.851209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.851662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.851669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.852066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.852074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.852468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.852475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.304 [2024-07-15 21:45:22.852856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.304 [2024-07-15 21:45:22.852862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.304 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.853247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.853254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 
00:29:33.305 [2024-07-15 21:45:22.853659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.853665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.853835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.853843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.854226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.854234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.854660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.854667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.855072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.855079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.855477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.855484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.855865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.855872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.856252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.856259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.856551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.856557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.856983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.856989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 
00:29:33.305 [2024-07-15 21:45:22.857400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.857407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.857814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.857821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.858209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.858215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.858606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.858614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.858963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.858971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.859375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.859382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.859769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.859776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.860167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.860174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.860603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.860610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 00:29:33.305 [2024-07-15 21:45:22.861017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.305 [2024-07-15 21:45:22.861025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.305 qpair failed and we were unable to recover it. 
00:29:33.310 [2024-07-15 21:45:22.943783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.310 [2024-07-15 21:45:22.943792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.310 qpair failed and we were unable to recover it. 00:29:33.310 [2024-07-15 21:45:22.944267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.310 [2024-07-15 21:45:22.944274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.310 qpair failed and we were unable to recover it. 00:29:33.310 [2024-07-15 21:45:22.944681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.310 [2024-07-15 21:45:22.944688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.310 qpair failed and we were unable to recover it. 00:29:33.310 [2024-07-15 21:45:22.945080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.310 [2024-07-15 21:45:22.945086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.310 qpair failed and we were unable to recover it. 00:29:33.310 [2024-07-15 21:45:22.945386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.310 [2024-07-15 21:45:22.945393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.310 qpair failed and we were unable to recover it. 00:29:33.310 [2024-07-15 21:45:22.945803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.310 [2024-07-15 21:45:22.945810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.310 qpair failed and we were unable to recover it. 00:29:33.310 [2024-07-15 21:45:22.946201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.310 [2024-07-15 21:45:22.946208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.310 qpair failed and we were unable to recover it. 00:29:33.310 [2024-07-15 21:45:22.946640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.310 [2024-07-15 21:45:22.946646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.310 qpair failed and we were unable to recover it. 00:29:33.310 [2024-07-15 21:45:22.947064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.947070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.947498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.947504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 
00:29:33.311 [2024-07-15 21:45:22.947914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.947921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.948418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.948446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.948845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.948854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.949141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.949150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.949585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.949592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.949980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.949987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.950375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.950382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.950795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.950802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.951311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.951339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.951806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.951816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 
00:29:33.311 [2024-07-15 21:45:22.952251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.952258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.952578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.952584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.952983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.952989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.953383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.953390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.953776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.953782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.954185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.954192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.954598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.954605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.955003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.955009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.955454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.955461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.955790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.955796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 
00:29:33.311 [2024-07-15 21:45:22.956237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.956244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.956725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.956731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.957160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.957167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.957618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.957625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.958033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.958039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.958283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.958290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.958749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.958755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.959222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.959229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.959583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.959590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.959998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.960006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 
00:29:33.311 [2024-07-15 21:45:22.960407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.960413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.960820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.960827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.961285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.961292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.961703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.961710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.962097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.962103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.962542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.962549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.962757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.962767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.963169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.963176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.311 qpair failed and we were unable to recover it. 00:29:33.311 [2024-07-15 21:45:22.963696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.311 [2024-07-15 21:45:22.963702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.964040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.964047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 
00:29:33.312 [2024-07-15 21:45:22.964341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.964348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.964755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.964762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.965230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.965236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.965681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.965687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.966049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.966056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.966454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.966460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.966870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.966877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.967364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.967371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.967762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.967768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.968200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.968208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 
00:29:33.312 [2024-07-15 21:45:22.968608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.968615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.969022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.969029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.969445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.969452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.969859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.969865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.970078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.970086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.970360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.970368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.970759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.970766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.970976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.970982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.971247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.971254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.971698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.971704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 
00:29:33.312 [2024-07-15 21:45:22.972088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.972095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.972461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.972467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.972895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.972902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.973216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.973223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.973633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.973639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.974055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.974062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.974429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.974436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.974841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.974848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.975159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.975165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.975640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.975648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 
00:29:33.312 [2024-07-15 21:45:22.976031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.976037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.312 [2024-07-15 21:45:22.976421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.312 [2024-07-15 21:45:22.976428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.312 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.976857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.976863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.977277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.977284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.977483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.977491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.977968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.977974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.978415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.978421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.978804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.978810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.979194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.979201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.979611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.979618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 
00:29:33.313 [2024-07-15 21:45:22.980050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.980057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.980469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.980476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.980893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.980900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.981249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.981256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.981673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.981679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.982081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.982087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.982488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.982495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.982905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.982911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.983402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.983430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.983829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.983838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 
00:29:33.313 [2024-07-15 21:45:22.984379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.984407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.984815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.984823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.985211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.985218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.985510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.985517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.985832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.985838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.986271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.986278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.986708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.986720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.987117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.987134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.987611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.987617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.988050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.988057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 
00:29:33.313 [2024-07-15 21:45:22.988291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.988298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.988720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.988727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.989029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.989036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.989436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.989444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.989745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.989752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.990141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.990149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.990556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.990563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.990988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.990994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.991353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.991360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 00:29:33.313 [2024-07-15 21:45:22.991771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.313 [2024-07-15 21:45:22.991777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.313 qpair failed and we were unable to recover it. 
00:29:33.314 [2024-07-15 21:45:22.992163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.314 [2024-07-15 21:45:22.992170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.314 qpair failed and we were unable to recover it. 00:29:33.314 [2024-07-15 21:45:22.992542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.314 [2024-07-15 21:45:22.992549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.314 qpair failed and we were unable to recover it. 00:29:33.314 [2024-07-15 21:45:22.992962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.314 [2024-07-15 21:45:22.992968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.314 qpair failed and we were unable to recover it. 00:29:33.314 [2024-07-15 21:45:22.993351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.314 [2024-07-15 21:45:22.993359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.314 qpair failed and we were unable to recover it. 00:29:33.314 [2024-07-15 21:45:22.993843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.314 [2024-07-15 21:45:22.993849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.314 qpair failed and we were unable to recover it. 00:29:33.314 [2024-07-15 21:45:22.994289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.314 [2024-07-15 21:45:22.994316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.314 qpair failed and we were unable to recover it. 00:29:33.314 [2024-07-15 21:45:22.994777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.314 [2024-07-15 21:45:22.994785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.314 qpair failed and we were unable to recover it. 00:29:33.314 [2024-07-15 21:45:22.995170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.314 [2024-07-15 21:45:22.995178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.314 qpair failed and we were unable to recover it. 00:29:33.314 [2024-07-15 21:45:22.995547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.314 [2024-07-15 21:45:22.995554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.314 qpair failed and we were unable to recover it. 00:29:33.314 [2024-07-15 21:45:22.995868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.314 [2024-07-15 21:45:22.995875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.314 qpair failed and we were unable to recover it. 
00:29:33.314 [2024-07-15 21:45:22.996304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.314 [2024-07-15 21:45:22.996311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.314 qpair failed and we were unable to recover it.
[the same three messages repeat for every successive reconnect attempt from 21:45:22.996 through 21:45:23.081, always with errno = 111 against tqpair=0x7f299c000b90, addr=10.0.0.2, port=4420]
00:29:33.320 [2024-07-15 21:45:23.081499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.320 [2024-07-15 21:45:23.081506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.320 qpair failed and we were unable to recover it.
00:29:33.320 [2024-07-15 21:45:23.081912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.081918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.082349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.082377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.082593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.082604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.082987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.082994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.083477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.083485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.083859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.083865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.084333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.084364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.084753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.084762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.085170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.085178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.085597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.085604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 
00:29:33.320 [2024-07-15 21:45:23.086010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.086017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.086442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.086449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.086854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.086861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.087251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.087257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.087698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.087704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.088110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.088117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.088471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.088478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.320 qpair failed and we were unable to recover it. 00:29:33.320 [2024-07-15 21:45:23.088888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-07-15 21:45:23.088894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.321 [2024-07-15 21:45:23.089387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.089414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.321 [2024-07-15 21:45:23.089818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.089826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 
00:29:33.321 [2024-07-15 21:45:23.090206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.090213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.321 [2024-07-15 21:45:23.090614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.090621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.321 [2024-07-15 21:45:23.091053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.091059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.321 [2024-07-15 21:45:23.091506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.091513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.321 [2024-07-15 21:45:23.091711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.091719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.321 [2024-07-15 21:45:23.092133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.092142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.321 [2024-07-15 21:45:23.092563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.092570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.321 [2024-07-15 21:45:23.092995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.093001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.321 [2024-07-15 21:45:23.093395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.093402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.321 [2024-07-15 21:45:23.093792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.093798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 
00:29:33.321 [2024-07-15 21:45:23.094220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.094227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.321 [2024-07-15 21:45:23.094609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.094615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.321 [2024-07-15 21:45:23.095021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.095028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.321 [2024-07-15 21:45:23.095448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.095455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.321 [2024-07-15 21:45:23.095840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.321 [2024-07-15 21:45:23.095846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.321 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.096252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.096260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.096673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.096680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.097088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.097094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.097488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.097495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.097882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.097889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 
00:29:33.603 [2024-07-15 21:45:23.098291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.098298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.098708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.098715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.099143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.099150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.099551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.099557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.099985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.099992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.100395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.100402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.100598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.100608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.101029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.101036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.101445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.101452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.101908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.101915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 
00:29:33.603 [2024-07-15 21:45:23.102316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.102323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.102698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.102705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.103152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.103159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.103578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.103585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.104010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.104017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.104451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.104458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.104842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.104848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.105233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.105240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.105652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.105658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.106045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.106052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 
00:29:33.603 [2024-07-15 21:45:23.106435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.106442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.106824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.106831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.107215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.107222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.107625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.107632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.108033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.108041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.108462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.108470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.108780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.108786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.109218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.109224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.109529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.109536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.109939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.109946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 
00:29:33.603 [2024-07-15 21:45:23.110331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.110337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.110753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.110760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.111059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.111065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.111265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.111274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.111639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.111645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.112071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.112077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.112470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.112477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.112856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.112862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.113251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.113259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.113749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.113756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 
00:29:33.603 [2024-07-15 21:45:23.114152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.114159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.114575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.114581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.115006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.115012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.115396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.115403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.115589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.115596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.115973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.115979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.116369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.116378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.116810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.116816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.117226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.117232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.603 [2024-07-15 21:45:23.117646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.117653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 
00:29:33.603 [2024-07-15 21:45:23.118078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.603 [2024-07-15 21:45:23.118084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.603 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.118507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.118514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.118708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.118715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.119134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.119141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.119437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.119444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.119891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.119897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.120281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.120288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.120702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.120709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.121112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.121119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.121489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.121496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 
00:29:33.604 [2024-07-15 21:45:23.121919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.121926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.122330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.122337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.122722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.122729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.123115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.123124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.123536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.123542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.123931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.123937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.124462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.124489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.124891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.124899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.125393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.125420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.125827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.125835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 
00:29:33.604 [2024-07-15 21:45:23.126316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.126343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.126756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.126764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.127235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.127242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.127527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.127534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.127946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.127953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.128342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.128348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.128778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.128786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.129190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.129197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.129685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.129692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 00:29:33.604 [2024-07-15 21:45:23.129984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.604 [2024-07-15 21:45:23.129991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.604 qpair failed and we were unable to recover it. 
00:29:33.605 [2024-07-15 21:45:23.130392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.130399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.130784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.130790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.131329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.131356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.131766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.131775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.132160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.132167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.132709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.132717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.133102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.133112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.133508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.133515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.133900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.133906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.134425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.134452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 
00:29:33.605 [2024-07-15 21:45:23.134854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.134862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.135308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.135336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.135767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.135775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.136196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.136204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.136639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.136646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.136951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.136958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.137364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.137370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.137754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.137760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.138146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.138153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.138534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.138540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 
00:29:33.605 [2024-07-15 21:45:23.138936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.138942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.139338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.139345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.139769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.139776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.140182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.140189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.140610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.140617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.141018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.141025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.141529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.605 [2024-07-15 21:45:23.141536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.605 qpair failed and we were unable to recover it. 00:29:33.605 [2024-07-15 21:45:23.141923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.141929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.142316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.142323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.142728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.142735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 
00:29:33.606 [2024-07-15 21:45:23.143037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.143044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.143408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.143414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.143869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.143875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.144074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.144084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.144494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.144500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.144883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.144889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.145271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.145277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.145694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.145701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.145991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.145998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.146395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.146402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 
00:29:33.606 [2024-07-15 21:45:23.146793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.146800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.147329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.147356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.147789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.147797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.148184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.148192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.148686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.148692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.149002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.149009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.149284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.149298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.149617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.149623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.150043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.150049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.150469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.150475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 
00:29:33.606 [2024-07-15 21:45:23.150898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.150904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.151291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.151297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.151701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.151707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.152037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.152043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.152447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.152453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.152748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.606 [2024-07-15 21:45:23.152754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.606 qpair failed and we were unable to recover it. 00:29:33.606 [2024-07-15 21:45:23.153147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.153154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.153580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.153586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.153971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.153977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.154367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.154374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 
00:29:33.607 [2024-07-15 21:45:23.154798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.154804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.155190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.155198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.155407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.155417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.155827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.155834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.156130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.156138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.156581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.156587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.156970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.156976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.157455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.157482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.157886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.157894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.158394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.158422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 
00:29:33.607 [2024-07-15 21:45:23.158858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.158866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.159357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.159384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.159783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.159791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.160179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.160187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.160576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.160583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.161014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.161021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.161442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.161450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.161864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.161871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.162296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.162303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.162684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.162691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 
00:29:33.607 [2024-07-15 21:45:23.163097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.163104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.163514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.163521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.163906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.163912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.607 [2024-07-15 21:45:23.164431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.607 [2024-07-15 21:45:23.164458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.607 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.164911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.164919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.165427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.165454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.165870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.165881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.166376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.166403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.166683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.166691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.167039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.167045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 
00:29:33.608 [2024-07-15 21:45:23.167441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.167448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.167812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.167819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.168232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.168239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.168673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.168679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.169170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.169177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.169552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.169559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.169941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.169948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.170371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.170378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.170763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.170769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.171162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.171169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 
00:29:33.608 [2024-07-15 21:45:23.171557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.171564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.171949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.171955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.608 [2024-07-15 21:45:23.172347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.608 [2024-07-15 21:45:23.172354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.608 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.172832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.172838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.173220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.173227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.173626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.173633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.174025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.174031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.174437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.174444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.174646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.174655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.175108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.175114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 
00:29:33.609 [2024-07-15 21:45:23.175526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.175533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.175963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.175969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.176448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.176475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.176784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.176793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.177220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.177227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.177434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.177444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.177863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.177870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.178264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.178271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.178710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.178718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.179128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.179134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 
00:29:33.609 [2024-07-15 21:45:23.179520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.179527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.179851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.179857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.180134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.180142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.180543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.180551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.180945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.180952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.181471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.181498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.181898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.181909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.182436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 [2024-07-15 21:45:23.182464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 00:29:33.609 [2024-07-15 21:45:23.182861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2367493 Killed "${NVMF_APP[@]}" "$@" 00:29:33.609 [2024-07-15 21:45:23.182869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.609 qpair failed and we were unable to recover it. 
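Note on the failure pattern above: the repeated "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f299c000b90" records line up with the shell message in this block reporting that target_disconnect.sh line 36 killed the previous target process ("${NVMF_APP[@]}"). On Linux, errno 111 is ECONNREFUSED: the host side keeps retrying a TCP connect() to 10.0.0.2:4420 while no NVMe/TCP listener is present. The minimal C sketch below (an illustration only, not SPDK's posix socket code; the address and port are copied from the log) shows how that errno is produced.

/* Minimal sketch: a TCP connect() to an address/port with no listener
 * fails with ECONNREFUSED (111 on Linux), mirroring the posix_sock_create
 * errors in the log above. Illustration only, not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* values taken from the log */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no NVMe/TCP listener on 10.0.0.2:4420 this prints
         * "connect() failed, errno = 111 (Connection refused)". */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}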
00:29:33.609 [2024-07-15 21:45:23.183374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.183401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 21:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:33.610 [2024-07-15 21:45:23.183716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.183725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 21:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:33.610 [2024-07-15 21:45:23.184141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.184149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 21:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:33.610 [2024-07-15 21:45:23.184432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.184439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 21:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:33.610 [2024-07-15 21:45:23.184714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.184721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 21:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:33.610 [2024-07-15 21:45:23.185111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.185118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 [2024-07-15 21:45:23.185615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.185622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 [2024-07-15 21:45:23.186002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.186009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 
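In the trace above, nvmf_target_disconnect_tc2 invokes disconnect_init 10.0.0.2, which then runs nvmfappstart -m 0xF0 to bring the target application back up. The -m argument is the SPDK application's core mask; assuming it is read as a plain bitmap of logical cores (bit N enables core N), 0xF0 selects cores 4 through 7. A small, purely illustrative C sketch of that decoding:

/* Decode a hex core mask such as the -m 0xF0 seen in the nvmfappstart
 * trace above. Assumes the mask is a bitmap of logical cores
 * (bit N == core N); with 0xF0 this prints cores 4 5 6 7. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xF0;   /* value taken from the log's "-m 0xF0" */

    printf("core mask 0x%llX selects cores:", mask);
    for (int core = 0; core < 64; core++) {
        if (mask & (1ULL << core))
            printf(" %d", core);
    }
    printf("\n");
    return 0;
}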
00:29:33.610 [2024-07-15 21:45:23.186447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.186458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 [2024-07-15 21:45:23.186851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.186858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 [2024-07-15 21:45:23.187268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.187275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 [2024-07-15 21:45:23.187681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.187688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 [2024-07-15 21:45:23.188094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.188100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 [2024-07-15 21:45:23.188490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.188497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 [2024-07-15 21:45:23.188884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.188891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 [2024-07-15 21:45:23.189403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.189431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 [2024-07-15 21:45:23.189850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.189859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 [2024-07-15 21:45:23.190356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.190384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 
00:29:33.610 [2024-07-15 21:45:23.190808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.190816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 [2024-07-15 21:45:23.191279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.191287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 [2024-07-15 21:45:23.191695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.191703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 21:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2368522 00:29:33.610 [2024-07-15 21:45:23.192146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.192158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 21:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2368522 00:29:33.610 [2024-07-15 21:45:23.192491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.192499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 21:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:33.610 21:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2368522 ']' 00:29:33.610 21:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.610 [2024-07-15 21:45:23.192904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 [2024-07-15 21:45:23.192911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 21:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:33.610 21:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.610 [2024-07-15 21:45:23.193327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:33.610 [2024-07-15 21:45:23.193336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.610 qpair failed and we were unable to recover it. 00:29:33.610 21:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:33.611 [2024-07-15 21:45:23.193648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.193655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 21:45:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:33.611 [2024-07-15 21:45:23.194062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.194070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.194467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.194475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.194777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.194784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.195069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.195076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.195496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.195503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.195913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.195921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.196337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.196347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 
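The trace above shows the target being relaunched (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF0, nvmfpid=2368522) while waitforlisten 2368522 blocks, with max_retries=100, until the new process is up and listening on the UNIX domain socket /var/tmp/spdk.sock. The C sketch below illustrates the idea of that wait by polling connect() on the RPC socket until it succeeds or the retry budget runs out; it is not the actual autotest helper, and the path and retry count simply mirror the trace.

/* Sketch of what the "waitforlisten" step above effectively waits for:
 * the relaunched nvmf_tgt accepting connections on its RPC socket.
 * Illustration only; path and retry budget mirror the trace. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int rpc_socket_ready(const char *path)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;
    int ok = (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0);
    close(fd);
    return ok;
}

int main(void)
{
    const int max_retries = 100;              /* mirrors max_retries=100 in the trace */
    const char *path = "/var/tmp/spdk.sock";

    for (int i = 0; i < max_retries; i++) {
        if (rpc_socket_ready(path)) {
            printf("RPC socket %s is accepting connections\n", path);
            return 0;
        }
        usleep(100 * 1000);                   /* 100 ms between attempts */
    }
    fprintf(stderr, "timed out waiting for %s\n", path);
    return 1;
}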
00:29:33.611 [2024-07-15 21:45:23.196762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.196770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.197180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.197188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.197615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.197622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.197923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.197930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.198335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.198342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.198744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.198751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.199021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.199030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.199413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.199421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.199839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.199846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.200119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.200138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 
00:29:33.611 [2024-07-15 21:45:23.200414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.200421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.200838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.200845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.201264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.201272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.201672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.201679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.202086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.202093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.202511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.202518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.202821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.202829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.203161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.203168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.203547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.203555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 00:29:33.611 [2024-07-15 21:45:23.203834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.611 [2024-07-15 21:45:23.203842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.611 qpair failed and we were unable to recover it. 
00:29:33.611 [2024-07-15 21:45:23.204153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.204161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.204612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.204618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.205003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.205009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.205418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.205425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.205851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.205859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.206063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.206073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.206531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.206538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.206923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.206929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.207222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.207229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.207592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.207599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 
00:29:33.612 [2024-07-15 21:45:23.207990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.207996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.208286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.208293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.208724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.208731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.209138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.209146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.209337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.209345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.209610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.209617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.210064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.210070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.210447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.210454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.210883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.210890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.211247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.211254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 
00:29:33.612 [2024-07-15 21:45:23.211664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.211670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.211947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.211954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.212355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.212362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.212745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.212752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.213148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.213154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.213653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.213660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.213928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.213935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.214342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.214349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.214641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.612 [2024-07-15 21:45:23.214648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.612 qpair failed and we were unable to recover it. 00:29:33.612 [2024-07-15 21:45:23.215101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.215108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 
00:29:33.613 [2024-07-15 21:45:23.215571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.215578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.215876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.215884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.216317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.216324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.216730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.216737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.216972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.216978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.217286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.217293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.217733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.217739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.217942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.217950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.218356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.218364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.218842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.218849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 
00:29:33.613 [2024-07-15 21:45:23.219357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.219384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.219801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.219809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.220219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.220227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.220645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.220652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.221095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.221104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.221504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.221511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.221929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.221935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.222451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.222478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.222943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.222952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.223442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.223470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 
00:29:33.613 [2024-07-15 21:45:23.223882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.223890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.224437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.224465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.224937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.224945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.225455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.225482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.225924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.225933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.226393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.226420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.226727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.226735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.227150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.227158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.227589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.227595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.227856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.227863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 
00:29:33.613 [2024-07-15 21:45:23.228188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.613 [2024-07-15 21:45:23.228195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.613 qpair failed and we were unable to recover it. 00:29:33.613 [2024-07-15 21:45:23.228501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.228507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.228894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.228900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.229252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.229258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.229668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.229674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.230096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.230103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.230521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.230528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.230843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.230850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.231254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.231261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.231687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.231693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 
00:29:33.614 [2024-07-15 21:45:23.232083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.232090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.232473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.232480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.232910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.232917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.233440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.233468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.233961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.233969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.234534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.234562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.234895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.234904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.235449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.235476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.235890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.235898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.236477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.236504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 
00:29:33.614 [2024-07-15 21:45:23.236718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.236727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.236819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.236825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.237208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.237216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.237607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.237613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.238051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.238060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.238468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.238475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.614 [2024-07-15 21:45:23.238879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.614 [2024-07-15 21:45:23.238886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.614 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.239107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.239114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.239564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.239572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.239885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.239892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 
00:29:33.615 [2024-07-15 21:45:23.240309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.240316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.240747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.240754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.241143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.241150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.241529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.241536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.241818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.241825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.242243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.242250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.242659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.242666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.243069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.243075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.243496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.243503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.243919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.243926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 
00:29:33.615 [2024-07-15 21:45:23.244343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.244350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.244617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.244623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.244886] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:29:33.615 [2024-07-15 21:45:23.244931] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.615 [2024-07-15 21:45:23.245040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.245047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.245431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.245437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.245845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.245852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.246132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.246139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.246390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.246398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.246805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.246813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.247264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.247272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 
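[editor's note] The repeated "connect() failed, errno = 111" entries above come from the POSIX connect() call while the NVMe/TCP initiator retries 10.0.0.2:4420; on Linux, errno 111 is ECONNREFUSED, which typically means nothing is listening on that address/port yet (here, consistent with the target still initializing, as the "Starting SPDK ... DPDK ... initialization" entry in this same window suggests). As a hedged illustration only, not part of the SPDK test scripts, the small standalone C probe below attempts the same kind of TCP connect and prints the resulting errno, so the value 111 can be reproduced and inspected by hand. The address and port are copied from the log and are otherwise assumptions.

/*
 * Minimal sketch: probe a TCP endpoint the same way the failing connect()
 * in the log does, and report errno. With no listener on 10.0.0.2:4420 this
 * prints errno = 111 (ECONNREFUSED) on Linux.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port seen in the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address seen in the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Mirrors the log line: "connect() failed, errno = 111" when refused. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connect() succeeded\n");
    }

    close(fd);
    return 0;
}

Seen this way, the long run of identical error pairs is just the initiator's connect retry loop reporting ECONNREFUSED each time until the listener appears, rather than many distinct failures.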
00:29:33.615 [2024-07-15 21:45:23.247589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.247596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.248010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.248018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.248228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.248238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.248665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.248673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.249105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.249112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.249529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.249536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.249844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.249851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.250282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.250290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.250679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.250686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.251089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.251096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 
00:29:33.615 [2024-07-15 21:45:23.251516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.251524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.251804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.251811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.252250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.252258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.252683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.252690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.253095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.253103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.253301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.253309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.253705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.253712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.254109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.254116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.254433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.254440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.254872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.254879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 
00:29:33.615 [2024-07-15 21:45:23.255107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.255114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.255554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.255562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.255954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.255961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.615 qpair failed and we were unable to recover it. 00:29:33.615 [2024-07-15 21:45:23.256489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.615 [2024-07-15 21:45:23.256518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.256955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.256964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.257476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.257504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.257923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.257932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.258358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.258390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.258723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.258732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.258946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.258955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 
00:29:33.616 [2024-07-15 21:45:23.259374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.259382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.259801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.259809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.260249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.260257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.260680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.260687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.260966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.260973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.261385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.261392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.261788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.261795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.262308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.262335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.262753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.262762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.263177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.263185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 
00:29:33.616 [2024-07-15 21:45:23.263576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.263583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.264061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.264067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.264466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.264473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.264870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.264876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.265270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.265277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.265696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.265703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.266152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.266160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.266604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.266611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.267003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.267010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 00:29:33.616 [2024-07-15 21:45:23.267350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.616 [2024-07-15 21:45:23.267358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.616 qpair failed and we were unable to recover it. 
00:29:33.616 [2024-07-15 21:45:23.267764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.616 [2024-07-15 21:45:23.267770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.616 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every connection attempt from 21:45:23.268200 through 21:45:23.276312 ...]
00:29:33.617 EAL: No free 2048 kB hugepages reported on node 1
[... same three-line connection-failure sequence repeated from 21:45:23.276717 through 21:45:23.329142 ...]
00:29:33.619 [2024-07-15 21:45:23.329153] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4
[... same three-line connection-failure sequence repeated from 21:45:23.329599 through 21:45:23.348378 ...]
00:29:33.620 [2024-07-15 21:45:23.348780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.620 [2024-07-15 21:45:23.348788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.620 qpair failed and we were unable to recover it.
00:29:33.621 [2024-07-15 21:45:23.349108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.349115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.349543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.349551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.349960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.349967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.350483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.350510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.350986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.350994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.351549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.351576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.351891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.351900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.352448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.352475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.352916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.352924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.353338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.353365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 
00:29:33.621 [2024-07-15 21:45:23.353831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.353839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.354365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.354392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.354831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.354839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.355338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.355365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.355769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.355777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.356036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.356043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.356347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.356354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.356747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.356754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.357169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.357176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.357597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.357603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 
00:29:33.621 [2024-07-15 21:45:23.358008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.358015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.358442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.358448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.358883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.358889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.359284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.359291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.359733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.359739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.359952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.359959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.360374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.360381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.360803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.360809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.361207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.361214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.361657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.361663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 
00:29:33.621 [2024-07-15 21:45:23.362091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.362097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.362477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.362484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.362902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.362909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.363417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.363444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.363847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.363856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.364372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.364400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.364831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.364840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.365228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.365236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.365638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.365645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 00:29:33.621 [2024-07-15 21:45:23.366041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.621 [2024-07-15 21:45:23.366048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.621 qpair failed and we were unable to recover it. 
00:29:33.621 [2024-07-15 21:45:23.366366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.366373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.366786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.366792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.367220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.367227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.367627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.367633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.368025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.368032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.368345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.368352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.368757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.368764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.369175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.369182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.369478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.369487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.369946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.369952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 
00:29:33.622 [2024-07-15 21:45:23.370388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.370395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.370804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.370812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.371231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.371238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.371550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.371557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.371948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.371954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.372355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.372362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.372753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.372760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.373168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.373175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.373518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.373525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.373955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.373962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 
00:29:33.622 [2024-07-15 21:45:23.374200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.374207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.374631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.374637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.375020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.375026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.375356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.375363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.375788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.375794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.376208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.376215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.376551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.376557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.376870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.376876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.377295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.377301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.377705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.377711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 
00:29:33.622 [2024-07-15 21:45:23.378136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.378144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.378516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.378523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.378950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.378956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.379203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.379210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.379560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.379566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.379953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.379959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.380356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.380363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.380754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.380760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.380985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.380991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.381475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.381482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 
00:29:33.622 [2024-07-15 21:45:23.381871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.381877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.382403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.382431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.382833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.382841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.383229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.383236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.383746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.383753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.384139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.384145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.384556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.384564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.384925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.384932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.385308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.385320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.385704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.385710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 
00:29:33.622 [2024-07-15 21:45:23.386078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.386084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.386501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.386508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-07-15 21:45:23.386908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.622 [2024-07-15 21:45:23.386914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-07-15 21:45:23.387382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.623 [2024-07-15 21:45:23.387410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-07-15 21:45:23.387687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.623 [2024-07-15 21:45:23.387695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-07-15 21:45:23.388086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.623 [2024-07-15 21:45:23.388092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-07-15 21:45:23.388493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.623 [2024-07-15 21:45:23.388501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-07-15 21:45:23.388907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.623 [2024-07-15 21:45:23.388914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-07-15 21:45:23.389367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.623 [2024-07-15 21:45:23.389394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-07-15 21:45:23.389708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.623 [2024-07-15 21:45:23.389717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-07-15 21:45:23.390149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.623 [2024-07-15 21:45:23.390156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.623 qpair failed and we were unable to recover it.
00:29:33.894 [2024-07-15 21:45:23.390550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.894 [2024-07-15 21:45:23.390558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.894 qpair failed and we were unable to recover it.
00:29:33.894 [2024-07-15 21:45:23.390976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.894 [2024-07-15 21:45:23.390984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.894 qpair failed and we were unable to recover it.
00:29:33.894 [2024-07-15 21:45:23.391411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.894 [2024-07-15 21:45:23.391418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.894 qpair failed and we were unable to recover it.
00:29:33.894 [2024-07-15 21:45:23.391854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.894 [2024-07-15 21:45:23.391862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.894 qpair failed and we were unable to recover it.
00:29:33.894 [2024-07-15 21:45:23.392380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.894 [2024-07-15 21:45:23.392407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.894 qpair failed and we were unable to recover it.
00:29:33.894 [2024-07-15 21:45:23.392813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.894 [2024-07-15 21:45:23.392821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.894 qpair failed and we were unable to recover it.
00:29:33.894 [2024-07-15 21:45:23.393236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.894 [2024-07-15 21:45:23.393244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.894 qpair failed and we were unable to recover it.
00:29:33.894 [2024-07-15 21:45:23.393679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.894 [2024-07-15 21:45:23.393686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.894 qpair failed and we were unable to recover it.
00:29:33.894 [2024-07-15 21:45:23.393916] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:33.894 [2024-07-15 21:45:23.393940] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:33.894 [2024-07-15 21:45:23.393948] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:33.894 [2024-07-15 21:45:23.393954] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:33.894 [2024-07-15 21:45:23.393959] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:33.894 [2024-07-15 21:45:23.394104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.894 [2024-07-15 21:45:23.394111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.894 qpair failed and we were unable to recover it.
00:29:33.894 [2024-07-15 21:45:23.394161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:33.894 [2024-07-15 21:45:23.394383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:33.894 [2024-07-15 21:45:23.394481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.894 [2024-07-15 21:45:23.394497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.894 qpair failed and we were unable to recover it.
00:29:33.894 [2024-07-15 21:45:23.394592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:33.894 [2024-07-15 21:45:23.394593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:33.894 [2024-07-15 21:45:23.394778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.894 [2024-07-15 21:45:23.394784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.894 qpair failed and we were unable to recover it.
00:29:33.894 [2024-07-15 21:45:23.395170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.894 [2024-07-15 21:45:23.395177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.894 qpair failed and we were unable to recover it.
00:29:33.894 [2024-07-15 21:45:23.395402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.894 [2024-07-15 21:45:23.395411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.894 qpair failed and we were unable to recover it.
00:29:33.894 [2024-07-15 21:45:23.395827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.894 [2024-07-15 21:45:23.395835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.894 qpair failed and we were unable to recover it.
00:29:33.894 [2024-07-15 21:45:23.396255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.894 [2024-07-15 21:45:23.396263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.894 qpair failed and we were unable to recover it.
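The app_setup_trace notices above amount to a short how-to for pulling trace data from the running nvmf target. A minimal sketch of that workflow, assuming the SPDK tools are on PATH and reusing the instance id 0 and shm file name exactly as printed in the notices (the /tmp output paths are illustrative, not from the log):
  # capture a snapshot of events at runtime, as the NOTICE suggests
  spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt
  # or keep the raw shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0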
00:29:33.894 [2024-07-15 21:45:23.396551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.894 [2024-07-15 21:45:23.396558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.894 qpair failed and we were unable to recover it. 00:29:33.894 [2024-07-15 21:45:23.397023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.894 [2024-07-15 21:45:23.397030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.894 qpair failed and we were unable to recover it. 00:29:33.894 [2024-07-15 21:45:23.397442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.894 [2024-07-15 21:45:23.397449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.894 qpair failed and we were unable to recover it. 00:29:33.894 [2024-07-15 21:45:23.397751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.894 [2024-07-15 21:45:23.397758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.894 qpair failed and we were unable to recover it. 00:29:33.894 [2024-07-15 21:45:23.398195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.894 [2024-07-15 21:45:23.398202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.894 qpair failed and we were unable to recover it. 00:29:33.894 [2024-07-15 21:45:23.398539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.894 [2024-07-15 21:45:23.398545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.894 qpair failed and we were unable to recover it. 00:29:33.894 [2024-07-15 21:45:23.398966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.894 [2024-07-15 21:45:23.398972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.894 qpair failed and we were unable to recover it. 00:29:33.894 [2024-07-15 21:45:23.399393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.894 [2024-07-15 21:45:23.399400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.894 qpair failed and we were unable to recover it. 00:29:33.894 [2024-07-15 21:45:23.399810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.894 [2024-07-15 21:45:23.399817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.894 qpair failed and we were unable to recover it. 00:29:33.894 [2024-07-15 21:45:23.400129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.895 [2024-07-15 21:45:23.400136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.895 qpair failed and we were unable to recover it. 
00:29:33.895 [2024-07-15 21:45:23.400583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:33.895 [2024-07-15 21:45:23.400590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 
00:29:33.895 qpair failed and we were unable to recover it. 
[... the same three-line error sequence repeats, differing only in its timestamps, for every reconnect attempt between 21:45:23.400 and 21:45:23.482 (on the order of 200 attempts): each connect() to 10.0.0.2 port 4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:29:33.900 [2024-07-15 21:45:23.482169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.482176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.482469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.482477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.482888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.482895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.483312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.483319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.483711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.483718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.484147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.484154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.484531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.484538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.484792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.484800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.485224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.485231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.485443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.485449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 
00:29:33.900 [2024-07-15 21:45:23.485893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.485899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.486210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.486217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.486663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.486669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.487057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.487063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.487180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.487186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.487587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.487594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.488003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.488011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.488265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.488272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.488683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.488689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.489001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.489007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 
00:29:33.900 [2024-07-15 21:45:23.489388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.489394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.489780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.489787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.900 [2024-07-15 21:45:23.490090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-15 21:45:23.490097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.900 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.490506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.490514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.490785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.490792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.491189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.491196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.491586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.491593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.491910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.491917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.492313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.492320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.492711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.492719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 
00:29:33.901 [2024-07-15 21:45:23.493115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.493125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.493537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.493544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.493937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.493943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.494450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.494478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.494914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.494923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.495444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.495472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.495703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.495711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.496131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.496138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.496427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.496434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.496848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.496855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 
00:29:33.901 [2024-07-15 21:45:23.497076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.497082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.497285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.497293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.497683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.497689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.497995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.498002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.498435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.498442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.498830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.498837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.499368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.499395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.499486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.499495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.499773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.499780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.500292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.500300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 
00:29:33.901 [2024-07-15 21:45:23.500749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.500756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.501143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.501149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.501550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.501556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.501964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.501971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.502388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.502395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.502823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.502830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.503003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.503010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.503407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.503414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.503802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.503809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.504207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.504214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 
00:29:33.901 [2024-07-15 21:45:23.504643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.504649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.504853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.504861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.901 [2024-07-15 21:45:23.505325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-15 21:45:23.505332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.901 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.505766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.505773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.505980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.505987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.506282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.506289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.506581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.506588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.506782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.506789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.506977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.506984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.507394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.507403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 
00:29:33.902 [2024-07-15 21:45:23.507668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.507675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.508137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.508144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.508510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.508516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.508697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.508704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.509096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.509102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.509498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.509505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.509776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.509783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.510261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.510268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.510677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.510683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.511077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.511084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 
00:29:33.902 [2024-07-15 21:45:23.511404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.511410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.511627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.511633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.511931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.511938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.512370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.512377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.512448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.512454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.512781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.512787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.513192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.513198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.513554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.513561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.513773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.513780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.514033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.514040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 
00:29:33.902 [2024-07-15 21:45:23.514468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.514474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.514773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.514780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.515155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.515162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.515582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.515588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.515790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.515799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.516228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.516235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.516637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.516643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.516908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.516915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.517341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.902 [2024-07-15 21:45:23.517348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.902 qpair failed and we were unable to recover it. 00:29:33.902 [2024-07-15 21:45:23.517734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.517740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 
00:29:33.903 [2024-07-15 21:45:23.518169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.518176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.518574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.518580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.518847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.518853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.519262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.519281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.519677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.519683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.519974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.519981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.520384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.520391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.520653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.520660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.520997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.521004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.521389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.521398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 
00:29:33.903 [2024-07-15 21:45:23.521813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.521820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.522231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.522238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.522649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.522656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.522899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.522906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.523328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.523335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.523604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.523610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.524045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.524051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.524524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.524531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.524913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.524920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.525308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.525315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 
00:29:33.903 [2024-07-15 21:45:23.525520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.525527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.525708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.525714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.526135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.526142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.526216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.526222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.526607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.526613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.526912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.526918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.527329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.527336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.527721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.527727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.528116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.528125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.528557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.528564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 
00:29:33.903 [2024-07-15 21:45:23.528970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.528977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.529482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.529509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.529783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.529792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.530222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.530229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.530626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.530632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.531020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.903 [2024-07-15 21:45:23.531028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.903 qpair failed and we were unable to recover it. 00:29:33.903 [2024-07-15 21:45:23.531448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.531456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.531873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.531880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.532209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.532217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.532612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.532618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 
00:29:33.904 [2024-07-15 21:45:23.533043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.533049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.533463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.533469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.533871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.533877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.534301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.534308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.534726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.534733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.535120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.535137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.535548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.535555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.535964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.535971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.536343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.536370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.536600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.536613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 
00:29:33.904 [2024-07-15 21:45:23.537029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.537035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.537328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.537335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.537540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.537547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.537947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.537953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.538361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.538368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.538757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.538763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.539102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.539109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.539530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.539537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.539623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.539629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.539999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.540005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 
00:29:33.904 [2024-07-15 21:45:23.540436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.540443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.540757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.540764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.541149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.541156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.541395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.541402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.541843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.541850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.542326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.542333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.542591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.542597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.543052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.543058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.543370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.543377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.543789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.543796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 
00:29:33.904 [2024-07-15 21:45:23.544193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.544200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.544408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.544418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.544650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.544657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.544874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.544881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.545159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.545166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.904 [2024-07-15 21:45:23.545564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.904 [2024-07-15 21:45:23.545571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.904 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.545979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.545986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.546190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.546197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.546389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.546398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.546779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.546786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 
00:29:33.905 [2024-07-15 21:45:23.547216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.547223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.547655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.547661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.548106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.548112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.548509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.548515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.548783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.548790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.548880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.548887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.549208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.549215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.549648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.549655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.550118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.550128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.550535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.550545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 
00:29:33.905 [2024-07-15 21:45:23.550957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.550964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.551353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.551360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.551854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.551861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.552371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.552399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.552815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.552824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.553259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.553267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.553699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.553706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.554119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.554133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.554431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.554439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.554832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.554840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 
00:29:33.905 [2024-07-15 21:45:23.555055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.555062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.555350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.555357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.555766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.555773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.556203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.556210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.556597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.556604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.556907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.556914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.557287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.557294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.557496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.557506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.557961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.557968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.558187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.558193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 
00:29:33.905 [2024-07-15 21:45:23.558469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.558475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.558875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.558881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.559275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.559283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.559684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.559690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.905 [2024-07-15 21:45:23.560160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.905 [2024-07-15 21:45:23.560167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.905 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.560363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.560370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.560755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.560762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.561149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.561156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.561555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.561562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.561996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.562003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 
00:29:33.906 [2024-07-15 21:45:23.562378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.562386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.562788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.562795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.563227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.563233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.563657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.563663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.564053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.564059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.564451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.564457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.564697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.564703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.564906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.564912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.565322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.565335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.565678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.565687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 
00:29:33.906 [2024-07-15 21:45:23.565950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.565956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.566169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.566175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.566554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.566561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.566971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.566979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.567390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.567396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.567611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.567617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.568039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.568046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.568440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.568447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.568710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.568716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.569144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.569151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 
00:29:33.906 [2024-07-15 21:45:23.569562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.569568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.569997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.570003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.570075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.570080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.570537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.570544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.570935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.570941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.571356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.571363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.571560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.571568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.571884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.571891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.572303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.572310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.572579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.572586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 
00:29:33.906 [2024-07-15 21:45:23.572995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.573001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.573263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.573270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.573562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.573568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.573977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.573983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.906 qpair failed and we were unable to recover it. 00:29:33.906 [2024-07-15 21:45:23.574363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.906 [2024-07-15 21:45:23.574370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.574597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.574603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.575022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.575028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.575289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.575296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.575749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.575755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.576147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.576153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 
00:29:33.907 [2024-07-15 21:45:23.576392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.576398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.576699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.576705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.577089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.577096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.577318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.577324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.577717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.577723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.577950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.577957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.578255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.578262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.578557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.578564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.578995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.579001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.579396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.579405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 
00:29:33.907 [2024-07-15 21:45:23.579618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.579624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.580075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.580082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.907 qpair failed and we were unable to recover it. 00:29:33.907 [2024-07-15 21:45:23.580363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.907 [2024-07-15 21:45:23.580371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.580796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.580804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.581214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.581220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.581673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.581680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.582075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.582081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.582476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.582483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.582701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.582707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.583109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.583116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 
00:29:33.908 [2024-07-15 21:45:23.583338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.583345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.583745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.583751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.584012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.584018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.584194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.584203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.584635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.584641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.584847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.584853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.585132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.585138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.585416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.585423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.585635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.585642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.586057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.586064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 
00:29:33.908 [2024-07-15 21:45:23.586357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.586364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.586699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.586706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.587138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.587145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.587607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.587613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.587808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.587815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.588026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.588033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.588261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.588268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.588578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.588584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.588999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.589007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 00:29:33.908 [2024-07-15 21:45:23.589264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.908 [2024-07-15 21:45:23.589271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.908 qpair failed and we were unable to recover it. 
00:29:33.908 [2024-07-15 21:45:23.589652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.908 [2024-07-15 21:45:23.589659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420
00:29:33.908 qpair failed and we were unable to recover it.
00:29:33.908 [... the same three-line error sequence (posix.c:1023:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously with timestamps from 21:45:23.589 through 21:45:23.667 across console lines stamped 00:29:33.908-00:29:33.914 ...]
00:29:33.914 [2024-07-15 21:45:23.667562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.667568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.667975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.667981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.668367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.668374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.668791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.668798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.669116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.669126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.669534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.669541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.669969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.669976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.670389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.670416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.670877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.670885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.671370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.671397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 
00:29:33.914 [2024-07-15 21:45:23.671613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.671623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.672024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.672031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.672438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.672445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.672758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.672765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.673222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.673229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.673653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.673659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.674037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.674043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.674228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.674236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.674479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.674485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.674743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.674749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 
00:29:33.914 [2024-07-15 21:45:23.675144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.675151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.675347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.675353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.675782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.675788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.676173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.676180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.676640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.676647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.677053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.677060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.677465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.677475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.677880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.677886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.678320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.678327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.678640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.678646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 
00:29:33.914 [2024-07-15 21:45:23.678878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.678885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.679163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.679170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.679245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.679252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.679630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.679637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.914 qpair failed and we were unable to recover it. 00:29:33.914 [2024-07-15 21:45:23.680109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.914 [2024-07-15 21:45:23.680116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.680516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.680523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.680908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.680915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.681328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.681335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.681742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.681749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.682140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.682147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 
00:29:33.915 [2024-07-15 21:45:23.682356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.682364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.682839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.682845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.683240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.683247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.683677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.683683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.684075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.684081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.684493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.684500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.684679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.684685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.684874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.684881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.685253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.685261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.685673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.685680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 
00:29:33.915 [2024-07-15 21:45:23.685895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.685902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.686310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.686317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.686700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.686707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.686780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.686787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.687151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.687157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.687435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.687442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:33.915 [2024-07-15 21:45:23.687871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.915 [2024-07-15 21:45:23.687878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:33.915 qpair failed and we were unable to recover it. 00:29:34.191 [2024-07-15 21:45:23.688264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.191 [2024-07-15 21:45:23.688273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.191 qpair failed and we were unable to recover it. 00:29:34.191 [2024-07-15 21:45:23.688495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.191 [2024-07-15 21:45:23.688502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.191 qpair failed and we were unable to recover it. 00:29:34.191 [2024-07-15 21:45:23.688937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.191 [2024-07-15 21:45:23.688943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.191 qpair failed and we were unable to recover it. 
00:29:34.191 [2024-07-15 21:45:23.689341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.191 [2024-07-15 21:45:23.689348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.191 qpair failed and we were unable to recover it. 00:29:34.191 [2024-07-15 21:45:23.689767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.191 [2024-07-15 21:45:23.689773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.191 qpair failed and we were unable to recover it. 00:29:34.191 [2024-07-15 21:45:23.689983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.191 [2024-07-15 21:45:23.689990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.191 qpair failed and we were unable to recover it. 00:29:34.191 [2024-07-15 21:45:23.690446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.191 [2024-07-15 21:45:23.690453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.191 qpair failed and we were unable to recover it. 00:29:34.191 [2024-07-15 21:45:23.690868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.191 [2024-07-15 21:45:23.690874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.191 qpair failed and we were unable to recover it. 00:29:34.191 [2024-07-15 21:45:23.691109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.191 [2024-07-15 21:45:23.691115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.191 qpair failed and we were unable to recover it. 00:29:34.191 [2024-07-15 21:45:23.691421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.191 [2024-07-15 21:45:23.691430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.691727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.691733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.692161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.692169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.692380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.692389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 
00:29:34.192 [2024-07-15 21:45:23.692808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.692815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.693114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.693121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.693400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.693408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.693822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.693828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.694049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.694055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.694257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.694265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.694465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.694472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.694771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.694778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.695193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.695201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.695614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.695621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 
00:29:34.192 [2024-07-15 21:45:23.696037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.696045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.696471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.696478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.696903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.696910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.697143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.697149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.697341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.697349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.697788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.697795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.698264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.698272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.698696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.698702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.699093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.699100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.699493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.699501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 
00:29:34.192 [2024-07-15 21:45:23.699935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.699943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.700236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.700243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.700466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.700473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.700881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.700889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.701323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.701330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.701718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.701724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.702157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.702164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.702478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.702485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.702894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.702901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.703290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.703297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 
00:29:34.192 [2024-07-15 21:45:23.703724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.703731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.704022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.704030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.704319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.704326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.704641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.704649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.705082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.705089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.192 [2024-07-15 21:45:23.705489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.192 [2024-07-15 21:45:23.705496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.192 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.705883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.705891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.706287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.706295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.706719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.706725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.707134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.707142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 
00:29:34.193 [2024-07-15 21:45:23.707442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.707449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.707647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.707653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.708084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.708091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.708286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.708294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.708480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.708486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.708908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.708915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.709310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.709317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.709736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.709743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.710158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.710165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.710396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.710403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 
00:29:34.193 [2024-07-15 21:45:23.710519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.710526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.710914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.710921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.711330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.711337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.711753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.711760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.712219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.712226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.712449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.712456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.712871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.712878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.713198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.713211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.713621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.713628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.714089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.714096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 
00:29:34.193 [2024-07-15 21:45:23.714487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.714494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.714888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.714895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.715281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.715287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.715791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.715798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.716208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.716215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.716510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.716516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.716931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.716938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.717327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.717334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.717833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.717839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 00:29:34.193 [2024-07-15 21:45:23.718179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.193 [2024-07-15 21:45:23.718186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.193 qpair failed and we were unable to recover it. 
00:29:34.193 [2024-07-15 21:45:23.718593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.718599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.718819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.718825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.719117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.719128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.719537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.719545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.719932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.719938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.720477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.720505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.720909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.720920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.721426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.721453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.721858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.721866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.722344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.722371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 
00:29:34.194 [2024-07-15 21:45:23.722779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.722787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.723178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.723186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.723390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.723398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.723786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.723793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.724210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.724218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.724412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.724418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.724675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.724682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.725128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.725135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.725554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.725560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.726065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.726071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 
00:29:34.194 [2024-07-15 21:45:23.726458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.726466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.726765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.726771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.727177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.727190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.727622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.727629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.728061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.728068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.728482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.728489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.728976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.728983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.729286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.729293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.729709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.729715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.729925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.729932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 
00:29:34.194 [2024-07-15 21:45:23.730345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.730351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.730756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.730762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.731130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.731137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.731542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.731549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.731954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.731960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.732479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.732506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.732914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.732922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.733139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.733149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.733452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.733459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 00:29:34.194 [2024-07-15 21:45:23.733851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.194 [2024-07-15 21:45:23.733858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.194 qpair failed and we were unable to recover it. 
00:29:34.194 [2024-07-15 21:45:23.734362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.734389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.734835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.734843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.735364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.735391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.735808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.735816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.736350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.736377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.736468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.736477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 
00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Write completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Write completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Write completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Write completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Write completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Write completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Write completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Write completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Write completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Write completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 Read completed with error (sct=0, sc=8) 00:29:34.195 starting I/O failed 00:29:34.195 [2024-07-15 21:45:23.736761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.195 [2024-07-15 21:45:23.737220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.737237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 
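The burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" entries above are NVMe completion statuses surfaced by the test application: SCT 0 is the generic command status set, and status code 08h in that set is, per the NVMe base specification's generic status table, "Command Aborted due to SQ Deletion" — consistent with outstanding I/O being aborted when the qpair hit the CQ transport error (-6) and was torn down. SPDK exposes the same two fields on a completion as cpl->status.sct and cpl->status.sc. The small standalone decoder below shows where those numbers sit in a completion queue entry; the Dword 3 bit positions are assumed from the base spec and the example value is constructed, so treat this as a sketch rather than part of the test code:

/* Sketch: decode SCT/SC from an NVMe completion queue entry Dword 3.
 * Assumed layout (NVMe base spec): bit 16 = phase tag,
 * bits 24:17 = status code (SC), bits 27:25 = status code type (SCT). */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t cqe_dw3 = 0x8u << 17;      /* example value: SCT=0, SC=0x08 as in the log */
    uint32_t sc  = (cqe_dw3 >> 17) & 0xffu;
    uint32_t sct = (cqe_dw3 >> 25) & 0x7u;
    printf("sct=%u, sc=%u\n", sct, sc); /* prints "sct=0, sc=8" */
    return 0;
}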
00:29:34.195 [2024-07-15 21:45:23.737641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.737676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.738094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.738106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.738638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.738674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.739121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.739140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.739714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.739751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.740341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.740378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.740837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.740849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.741069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.741079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.741596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.741632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.742077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.742089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 
00:29:34.195 [2024-07-15 21:45:23.742476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.742512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.742958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.742970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.743480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.743517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.743959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.743971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.744469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.744506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.744720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.744733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.745154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.745165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.745379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.195 [2024-07-15 21:45:23.745388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.195 qpair failed and we were unable to recover it. 00:29:34.195 [2024-07-15 21:45:23.745781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.745790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.746183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.746193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 
00:29:34.196 [2024-07-15 21:45:23.746472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.746482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.746891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.746900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.747322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.747333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.747746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.747756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.747833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.747841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.748131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.748141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.748520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.748530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.748809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.748820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.749234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.749244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.749550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.749559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 
00:29:34.196 [2024-07-15 21:45:23.749974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.749983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.750245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.750257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.750685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.750694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.751131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.751144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.751474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.751484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.751924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.751933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.752336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.752346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.752769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.752779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.752995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.753005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.753500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.753510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 
00:29:34.196 [2024-07-15 21:45:23.753897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.753906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.754291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.754301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.754709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.754718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.755127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.755137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.755539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.755549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.756003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.756012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.756328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.756338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.756753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.756762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.757025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.757034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 00:29:34.196 [2024-07-15 21:45:23.757462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.196 [2024-07-15 21:45:23.757471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.196 qpair failed and we were unable to recover it. 
00:29:34.197 [2024-07-15 21:45:23.757857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.757866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.758258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.758268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.758688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.758697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.759016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.759025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.759457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.759466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.759688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.759701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.760067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.760077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.760385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.760395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.760806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.760816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.761251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.761261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 
00:29:34.197 [2024-07-15 21:45:23.761667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.761679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.762068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.762077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.762474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.762484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.762873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.762882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.763276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.763286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.763411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.763419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.763856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.763866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.764294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.764304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.764711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.764721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.765131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.765141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 
00:29:34.197 [2024-07-15 21:45:23.765547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.765556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.765767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.765779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.766077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.766087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.766311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.766321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.766742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.766752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.767139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.767148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.767227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.767235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.767625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.767634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.768050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.768060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.768335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.768345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 
00:29:34.197 [2024-07-15 21:45:23.768774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.768784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.769189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.769199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.769516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.769525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.769962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.769971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.770361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.770371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.770802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.770811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.771197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.771206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.197 qpair failed and we were unable to recover it. 00:29:34.197 [2024-07-15 21:45:23.771652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.197 [2024-07-15 21:45:23.771661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.772055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.772064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.772476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.772486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 
00:29:34.198 [2024-07-15 21:45:23.772948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.772957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.773219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.773229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.773448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.773457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.773876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.773885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.774297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.774307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.774709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.774719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.774797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.774806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.775224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.775233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.775625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.775635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.775848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.775857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 
00:29:34.198 [2024-07-15 21:45:23.776241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.776251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.776731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.776743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.777148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.777158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.777570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.777579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.777967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.777976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.778407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.778417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.778629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.778640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.778839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.778849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.779098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.779107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 00:29:34.198 [2024-07-15 21:45:23.779514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.198 [2024-07-15 21:45:23.779524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.198 qpair failed and we were unable to recover it. 
00:29:34.198 [2024-07-15 21:45:23.779917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.198 [2024-07-15 21:45:23.779926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420
00:29:34.198 qpair failed and we were unable to recover it.
00:29:34.198 [2024-07-15 21:45:23.780334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.198 [2024-07-15 21:45:23.780344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420
00:29:34.198 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every connection attempt logged between 2024-07-15 21:45:23.780702 and 21:45:23.860039 ...]
00:29:34.204 [2024-07-15 21:45:23.860279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.204 [2024-07-15 21:45:23.860289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420
00:29:34.204 qpair failed and we were unable to recover it.
00:29:34.204 [2024-07-15 21:45:23.860714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.860723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.860990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.860999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.861276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.861286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.861719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.861729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.862113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.862126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.862555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.862565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.862879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.862889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.863101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.863111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.863295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.863307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.863724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.863735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 
00:29:34.204 [2024-07-15 21:45:23.864144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.864154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.864636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.864646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.865069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.865078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.865520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.865530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.865916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.865925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.866333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.866343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.866548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.866557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.866967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.866976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.867382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.867392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.867598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.867607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 
00:29:34.204 [2024-07-15 21:45:23.868033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.868042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.868417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.868427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.868929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.868938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.869401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.869410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.869849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.869858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.870063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.870072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.870326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.870338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.870543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.870552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.870746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.870755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 00:29:34.204 [2024-07-15 21:45:23.871220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.204 [2024-07-15 21:45:23.871231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.204 qpair failed and we were unable to recover it. 
00:29:34.205 [2024-07-15 21:45:23.871666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.871676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.872084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.872094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.872387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.872397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.872603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.872612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.873046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.873056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.873540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.873550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.873814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.873825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.874252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.874262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.874679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.874689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.875076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.875085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 
00:29:34.205 [2024-07-15 21:45:23.875303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.875312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.875718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.875727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.875950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.875959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.876378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.876388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.876780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.876790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.877052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.877061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.877284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.877294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.877519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.877528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.877944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.877953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.878347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.878356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 
00:29:34.205 [2024-07-15 21:45:23.878764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.878774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.879006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.879016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.879311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.879321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.879736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.879748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.880135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.880145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.880553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.880562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.880981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.880990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.881332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.881341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.881750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.881759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.882053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.882062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 
00:29:34.205 [2024-07-15 21:45:23.882484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.882493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.882756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.882766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.883175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.883185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.883486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.883495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.205 qpair failed and we were unable to recover it. 00:29:34.205 [2024-07-15 21:45:23.883899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.205 [2024-07-15 21:45:23.883908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.884388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.884398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.884718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.884727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.885043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.885052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.885453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.885463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.885897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.885907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 
00:29:34.206 [2024-07-15 21:45:23.886246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.886256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.886676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.886686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.886764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.886774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.887140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.887150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.887571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.887580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.888026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.888036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.888449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.888459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.888759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.888769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.889044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.889053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.889463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.889473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 
00:29:34.206 [2024-07-15 21:45:23.889676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.889685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.890099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.890108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.890314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.890323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.890747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.890756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.890958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.890967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.891389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.891399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.891788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.891797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.892180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.892190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.892586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.892595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.892780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.892788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 
00:29:34.206 [2024-07-15 21:45:23.893222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.893232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.893622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.893631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.894017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.894026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.894248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.894258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.894672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.894683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.895070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.895080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.895372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.895383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.895788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.895798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.896189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.896199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.896478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.896487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 
00:29:34.206 [2024-07-15 21:45:23.896808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.896818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.897046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.897059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.897470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.897480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.897878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.897888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.898310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.898319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.206 [2024-07-15 21:45:23.898553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.206 [2024-07-15 21:45:23.898562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.206 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.898969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.898979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.899373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.899382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.899794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.899803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.900113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.900127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 
00:29:34.207 [2024-07-15 21:45:23.900410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.900419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.900803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.900812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.901289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.901299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.901684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.901693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.902089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.902099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.902524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.902534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.902752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.902761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.903159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.903168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.903353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.903362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.903747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.903757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 
00:29:34.207 [2024-07-15 21:45:23.904166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.904175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.904562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.904573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.904797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.904807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.905136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.905146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.905568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.905579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.905964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.905973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.906365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.906375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.906787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.906797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.907201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.907211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.907604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.907613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 
00:29:34.207 [2024-07-15 21:45:23.908040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.908050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.908406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.908415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.908875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.908884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.909280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.909290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.909678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.909687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.910101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.910111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.910560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.910570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.910986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.910996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.911602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.911639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 00:29:34.207 [2024-07-15 21:45:23.911950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.207 [2024-07-15 21:45:23.911963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.207 qpair failed and we were unable to recover it. 
00:29:34.482 [2024-07-15 21:45:23.986179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.986190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.986587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.986597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.987007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.987017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.987239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.987248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.987701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.987711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.988127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.988137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.988529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.988538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.988927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.988936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.989386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.989396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.989785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.989795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 
00:29:34.482 [2024-07-15 21:45:23.990094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.990103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.990518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.990528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.990964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.990973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.991349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.991386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.991720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.991732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.992149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.992159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.992585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.992595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.992984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.992994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.993392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.993403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.993628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.993637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 
00:29:34.482 [2024-07-15 21:45:23.993949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.993958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.994295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.994305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.994581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.994592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.995003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.995013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.995433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.995443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.482 [2024-07-15 21:45:23.995878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.482 [2024-07-15 21:45:23.995888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.482 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:23.996197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:23.996208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:23.996652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:23.996662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:23.997057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:23.997066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:23.997466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:23.997476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 
00:29:34.483 [2024-07-15 21:45:23.997670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:23.997680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:23.998066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:23.998075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:23.998488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:23.998498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:23.998913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:23.998922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:23.999338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:23.999349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:23.999760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:23.999770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.000080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.000092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.000575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.000585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.000796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.000806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.001068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.001077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 
00:29:34.483 [2024-07-15 21:45:24.001488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.001498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.001775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.001784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.002108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.002117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.002518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.002528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.002945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.002954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.003356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.003393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.003732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.003744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.004226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.004237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.004544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.004554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.004979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.004988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 
00:29:34.483 [2024-07-15 21:45:24.005443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.005454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.005759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.005769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.006168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.006177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.006462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.006472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.006886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.006895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.007253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.007264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.007659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.007668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.007881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.007890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.008261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.008272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.008550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.008560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 
00:29:34.483 [2024-07-15 21:45:24.008912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.008922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.009116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.009130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.009416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.009426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.483 [2024-07-15 21:45:24.009848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.483 [2024-07-15 21:45:24.009857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.483 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.010307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.010317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.010819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.010829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.011220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.011230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.011644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.011653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.012091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.012100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.012577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.012587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 
00:29:34.484 [2024-07-15 21:45:24.012813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.012823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.013266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.013275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.013632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.013641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.014047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.014056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.014289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.014304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.014553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.014563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.015018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.015028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.015428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.015442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.015829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.015838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.016146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.016157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 
00:29:34.484 [2024-07-15 21:45:24.016496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.016506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.016726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.016735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.017168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.017178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.017411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.017421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.017813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.017822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.018141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.018151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.018573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.018583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:34.484 [2024-07-15 21:45:24.019011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.019021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.019139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.019151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6469e0 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 
00:29:34.484 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:34.484 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:34.484 [2024-07-15 21:45:24.019671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.019698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:34.484 [2024-07-15 21:45:24.019971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.019981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.484 [2024-07-15 21:45:24.020550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.020577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.020799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.020807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.021367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.021395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.021799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.021807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.022129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.022137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.022609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.022636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 
00:29:34.484 [2024-07-15 21:45:24.023059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.023068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.023414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.023441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.484 [2024-07-15 21:45:24.023873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.484 [2024-07-15 21:45:24.023882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.484 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.024337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.024366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.024802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.024810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.025360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.025391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.025849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.025857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.026406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.026435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.026848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.026858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.027130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.027137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 
00:29:34.485 [2024-07-15 21:45:24.027554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.027561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.027959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.027966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.028484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.028511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.028927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.028936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.029012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.029020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.029460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.029467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.029904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.029911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.030309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.030317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.030741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.030747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.031147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.031155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 
00:29:34.485 [2024-07-15 21:45:24.031560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.031567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.032009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.032016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.032243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.032253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.032659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.032667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.033055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.033063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.033485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.033492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.033716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.033722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.034035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.034043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.034229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.034236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.034640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.034648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 
00:29:34.485 [2024-07-15 21:45:24.035058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.035066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.035439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.035447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.035757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.035766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.036184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.036193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.036529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.036537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.036744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.036753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.037078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.037087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.037461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.037469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.037857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.037863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.038348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.038355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 
00:29:34.485 [2024-07-15 21:45:24.038748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.485 [2024-07-15 21:45:24.038754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.485 qpair failed and we were unable to recover it. 00:29:34.485 [2024-07-15 21:45:24.039188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.039202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.039613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.039621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.039832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.039840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.040256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.040264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.040697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.040710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.041118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.041173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.041558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.041566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.041978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.041985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.042632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.042659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 
00:29:34.486 [2024-07-15 21:45:24.043069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.043077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.043595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.043622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.044033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.044042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.044446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.044454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.044886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.044893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.045392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.045420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.045847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.045857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.046427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.046456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.046893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.046903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.047364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.047392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 
00:29:34.486 [2024-07-15 21:45:24.047801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.047810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.048360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.048387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.048662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.048671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.049082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.049090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.049499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.049507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.049899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.049906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.050425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.050453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.050882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.050891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.050996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.051002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.051410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.051417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 
00:29:34.486 [2024-07-15 21:45:24.051852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.051866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.052064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.052070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.052465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.052474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.052883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.486 [2024-07-15 21:45:24.052891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.486 qpair failed and we were unable to recover it. 00:29:34.486 [2024-07-15 21:45:24.053414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.053443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.053849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.053859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.054329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.054357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.054767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.054775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.055195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.055203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.055602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.055608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 
00:29:34.487 [2024-07-15 21:45:24.056001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.056008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.056186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.056193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.056634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.056641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.056757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.056765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.057085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.057091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.057507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.057517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.057951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.057958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.058193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.058200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.058648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.058655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 
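The connect() retries above all fail with errno = 111, which on Linux is ECONNREFUSED: at this point in the log the NVMe/TCP listener on 10.0.0.2:4420 has not been added yet, so every attempt is refused outright. A quick way to confirm the errno mapping from a shell on the test node (assuming python3 is installed):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'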
00:29:34.487 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.487 [2024-07-15 21:45:24.059041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.059050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.059250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.059260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.487 [2024-07-15 21:45:24.059547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.059555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.487 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.487 [2024-07-15 21:45:24.059971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.059982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.060298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.060307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.060717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.060723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.060957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.060965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.061377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.061384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 
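Interleaved with the retry noise, the target-side setup starts here: a cleanup trap is installed and a RAM-backed bdev is created. rpc_cmd in the SPDK test harness is a wrapper around scripts/rpc.py talking to the running target's RPC socket, so the same step can be sketched as the call below (the default /var/tmp/spdk.sock socket path is an assumption):

  # 64 MB malloc bdev with a 512-byte block size, named Malloc0
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0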
00:29:34.487 [2024-07-15 21:45:24.061807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.061814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.062239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.062246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.062648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.062656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.063063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.063070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.063526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.063532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.063951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.063958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.064343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.064370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.064626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.064635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.064974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.064980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.065362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.065370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 
00:29:34.487 [2024-07-15 21:45:24.065799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.065805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.066193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.066200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.066625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.066632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.067027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.067034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.067525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.067532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.067969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.487 [2024-07-15 21:45:24.067976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.487 qpair failed and we were unable to recover it. 00:29:34.487 [2024-07-15 21:45:24.068398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.068425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.068884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.068893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.069399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.069426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.069709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.069718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 
00:29:34.488 [2024-07-15 21:45:24.070131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.070139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.070614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.070620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.071023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.071030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.071423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.071429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.071826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.071834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.072263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.072270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.072709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.072719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.073030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.073037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.073518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.073525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.073935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.073941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 
00:29:34.488 [2024-07-15 21:45:24.074325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.074333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.074774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.074781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.075093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.075099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.075520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.075527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.075996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.076003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 Malloc0 00:29:34.488 [2024-07-15 21:45:24.076607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.076635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.488 [2024-07-15 21:45:24.077108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.077117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:34.488 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.488 [2024-07-15 21:45:24.077718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.077746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 
00:29:34.488 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.488 [2024-07-15 21:45:24.078332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.078360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.078823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.078831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.079366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.079393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.079812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.488 [2024-07-15 21:45:24.079908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.079917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.080438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.080467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.080875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.080883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.081096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.081103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.081553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.081561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.081880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.081887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 
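The '*** TCP Transport Init ***' notice above is the target acknowledging the nvmf_create_transport -t tcp -o call issued just before it. As a standalone sketch (same assumed socket path; reading -o as the switch that disables the TCP C2H success optimization is also an assumption):

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o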
00:29:34.488 [2024-07-15 21:45:24.082285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.082292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.082686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.082693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.083112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.083119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.083331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.083340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.083761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.083768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.084028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.084035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.084463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.488 [2024-07-15 21:45:24.084470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.488 qpair failed and we were unable to recover it. 00:29:34.488 [2024-07-15 21:45:24.084694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.084701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.085003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.085011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.085279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.085285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 
00:29:34.489 [2024-07-15 21:45:24.085486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.085492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.085876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.085882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.086073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.086081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.086529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.086536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.086949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.086955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.087370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.087376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.087674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.087681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.087894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.087904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.088352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.088358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 
00:29:34.489 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.489 [2024-07-15 21:45:24.088750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.088757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.489 [2024-07-15 21:45:24.089193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.089200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.489 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.489 [2024-07-15 21:45:24.089647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.089654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.090074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.090081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.090547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.090554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.090965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.090972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.091535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.091561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.092052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.092060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 
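Next the subsystem that the initiator keeps probing is created; -a allows any host NQN to connect and -s sets the serial number (same assumed socket path as above):

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001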
00:29:34.489 [2024-07-15 21:45:24.092290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.092297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.092514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.092521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.092942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.092949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.093372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.093379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.093813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.093820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.094217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.094224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.094597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.094604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.094836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.094842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.095260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.095267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.095704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.095711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 
00:29:34.489 [2024-07-15 21:45:24.096158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.096165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.489 [2024-07-15 21:45:24.096602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.096610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.489 [2024-07-15 21:45:24.097035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.097042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.489 [2024-07-15 21:45:24.097473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.097481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.489 [2024-07-15 21:45:24.097889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.097896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.098325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.098332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.098730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.098736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-15 21:45:24.099130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.489 [2024-07-15 21:45:24.099138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.489 qpair failed and we were unable to recover it. 
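The Malloc0 bdev created earlier is then exposed as a namespace of that subsystem:

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0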
00:29:34.489 [2024-07-15 21:45:24.099547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.099554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.099941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.099947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.100250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.100258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.100676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.100682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.101080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.101087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.101375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.101382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.101614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.101620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.101896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.101903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.101996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.102003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.102309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.102316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 
00:29:34.490 [2024-07-15 21:45:24.102749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.102755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.103186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.103193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.103507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.103514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.103923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.103930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.104150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.104157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.104550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.104557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.490 [2024-07-15 21:45:24.104862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.104869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.490 [2024-07-15 21:45:24.105271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.105278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 
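Finally a TCP listener is attached on the address and port the initiator has been retrying all along; the 'NVMe/TCP Target Listening on 10.0.0.2 port 4420' notice a little further down confirms it took effect:

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420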
00:29:34.490 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.490 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.490 [2024-07-15 21:45:24.105666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.105673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.105937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.105943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.106353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.106360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.106766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.106773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.107173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.107179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.107368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.107374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.107765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.107771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-15 21:45:24.108035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.490 [2024-07-15 21:45:24.108041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f299c000b90 with addr=10.0.0.2, port=4420 00:29:34.490 qpair failed and we were unable to recover it. 
00:29:34.490 [2024-07-15 21:45:24.108049] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.490 [2024-07-15 21:45:24.110460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.490 [2024-07-15 21:45:24.110575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.490 [2024-07-15 21:45:24.110589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.490 [2024-07-15 21:45:24.110594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.490 [2024-07-15 21:45:24.110599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.490 [2024-07-15 21:45:24.110613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.490 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:34.490 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.490 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.490 [2024-07-15 21:45:24.120401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.490 [2024-07-15 21:45:24.120482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.490 [2024-07-15 21:45:24.120495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.490 [2024-07-15 21:45:24.120500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.490 [2024-07-15 21:45:24.120505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.491 [2024-07-15 21:45:24.120516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.491 qpair failed and we were unable to recover it. 
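With the listener up, the failure mode changes: the TCP socket now connects, but the Fabrics CONNECT for the I/O queue pair is rejected ('Unknown controller ID 0x1' on the target; 'Connect command completed with error: sct 1, sc 130' and 'CQ transport error -6 (No such device or address)' on the initiator), which is precisely the recovery path this target_disconnect test exercises. For orientation only, and not something the harness itself runs, an equivalent host-side connection attempt with nvme-cli would look like:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1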
00:29:34.491 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.491 21:45:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2367722 00:29:34.491 [2024-07-15 21:45:24.130410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.491 [2024-07-15 21:45:24.130483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.491 [2024-07-15 21:45:24.130495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.491 [2024-07-15 21:45:24.130501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.491 [2024-07-15 21:45:24.130505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.491 [2024-07-15 21:45:24.130516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.491 qpair failed and we were unable to recover it. 00:29:34.491 [2024-07-15 21:45:24.140394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.491 [2024-07-15 21:45:24.140576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.491 [2024-07-15 21:45:24.140588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.491 [2024-07-15 21:45:24.140593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.491 [2024-07-15 21:45:24.140597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.491 [2024-07-15 21:45:24.140608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.491 qpair failed and we were unable to recover it. 00:29:34.491 [2024-07-15 21:45:24.150386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.491 [2024-07-15 21:45:24.150465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.491 [2024-07-15 21:45:24.150477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.491 [2024-07-15 21:45:24.150482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.491 [2024-07-15 21:45:24.150486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.491 [2024-07-15 21:45:24.150497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.491 qpair failed and we were unable to recover it. 
00:29:34.491 [2024-07-15 21:45:24.160430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.491 [2024-07-15 21:45:24.160501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.491 [2024-07-15 21:45:24.160513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.491 [2024-07-15 21:45:24.160519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.491 [2024-07-15 21:45:24.160523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.491 [2024-07-15 21:45:24.160533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.491 qpair failed and we were unable to recover it. 00:29:34.491 [2024-07-15 21:45:24.170447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.491 [2024-07-15 21:45:24.170521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.491 [2024-07-15 21:45:24.170533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.491 [2024-07-15 21:45:24.170538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.491 [2024-07-15 21:45:24.170542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.491 [2024-07-15 21:45:24.170553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.491 qpair failed and we were unable to recover it. 00:29:34.491 [2024-07-15 21:45:24.180489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.491 [2024-07-15 21:45:24.180559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.491 [2024-07-15 21:45:24.180571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.491 [2024-07-15 21:45:24.180576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.491 [2024-07-15 21:45:24.180580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.491 [2024-07-15 21:45:24.180591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.491 qpair failed and we were unable to recover it. 
00:29:34.491 [2024-07-15 21:45:24.190478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.491 [2024-07-15 21:45:24.190552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.491 [2024-07-15 21:45:24.190564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.491 [2024-07-15 21:45:24.190569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.491 [2024-07-15 21:45:24.190573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.491 [2024-07-15 21:45:24.190584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.491 qpair failed and we were unable to recover it. 00:29:34.491 [2024-07-15 21:45:24.200518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.491 [2024-07-15 21:45:24.200603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.491 [2024-07-15 21:45:24.200615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.491 [2024-07-15 21:45:24.200620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.491 [2024-07-15 21:45:24.200624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.491 [2024-07-15 21:45:24.200635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.491 qpair failed and we were unable to recover it. 00:29:34.491 [2024-07-15 21:45:24.210566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.491 [2024-07-15 21:45:24.210630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.491 [2024-07-15 21:45:24.210642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.491 [2024-07-15 21:45:24.210647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.491 [2024-07-15 21:45:24.210654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.491 [2024-07-15 21:45:24.210665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.491 qpair failed and we were unable to recover it. 
00:29:34.491 [2024-07-15 21:45:24.220571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.491 [2024-07-15 21:45:24.220642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.491 [2024-07-15 21:45:24.220654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.491 [2024-07-15 21:45:24.220659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.491 [2024-07-15 21:45:24.220663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.491 [2024-07-15 21:45:24.220673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.491 qpair failed and we were unable to recover it. 00:29:34.491 [2024-07-15 21:45:24.230660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.491 [2024-07-15 21:45:24.230739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.491 [2024-07-15 21:45:24.230751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.491 [2024-07-15 21:45:24.230756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.491 [2024-07-15 21:45:24.230760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.491 [2024-07-15 21:45:24.230770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.491 qpair failed and we were unable to recover it. 00:29:34.491 [2024-07-15 21:45:24.240659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.491 [2024-07-15 21:45:24.240729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.491 [2024-07-15 21:45:24.240741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.491 [2024-07-15 21:45:24.240746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.491 [2024-07-15 21:45:24.240750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.491 [2024-07-15 21:45:24.240760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.491 qpair failed and we were unable to recover it. 
00:29:34.491 [2024-07-15 21:45:24.250672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.491 [2024-07-15 21:45:24.250741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.491 [2024-07-15 21:45:24.250753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.491 [2024-07-15 21:45:24.250758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.491 [2024-07-15 21:45:24.250762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.491 [2024-07-15 21:45:24.250772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.491 qpair failed and we were unable to recover it. 00:29:34.491 [2024-07-15 21:45:24.260571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.491 [2024-07-15 21:45:24.260641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.491 [2024-07-15 21:45:24.260654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.491 [2024-07-15 21:45:24.260659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.491 [2024-07-15 21:45:24.260663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.491 [2024-07-15 21:45:24.260674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.491 qpair failed and we were unable to recover it. 00:29:34.492 [2024-07-15 21:45:24.270714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.492 [2024-07-15 21:45:24.270788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.492 [2024-07-15 21:45:24.270800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.492 [2024-07-15 21:45:24.270805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.492 [2024-07-15 21:45:24.270809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.492 [2024-07-15 21:45:24.270820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.492 qpair failed and we were unable to recover it. 
00:29:34.755 [2024-07-15 21:45:24.280743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.755 [2024-07-15 21:45:24.280812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.755 [2024-07-15 21:45:24.280825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.755 [2024-07-15 21:45:24.280830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.755 [2024-07-15 21:45:24.280834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.755 [2024-07-15 21:45:24.280845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.755 qpair failed and we were unable to recover it. 00:29:34.755 [2024-07-15 21:45:24.290774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.755 [2024-07-15 21:45:24.290849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.755 [2024-07-15 21:45:24.290868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.755 [2024-07-15 21:45:24.290874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.755 [2024-07-15 21:45:24.290879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.755 [2024-07-15 21:45:24.290893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.755 qpair failed and we were unable to recover it. 00:29:34.755 [2024-07-15 21:45:24.300833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.755 [2024-07-15 21:45:24.300946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.755 [2024-07-15 21:45:24.300965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.755 [2024-07-15 21:45:24.300975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.755 [2024-07-15 21:45:24.300980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.755 [2024-07-15 21:45:24.300995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.755 qpair failed and we were unable to recover it. 
00:29:34.755 [2024-07-15 21:45:24.310818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.755 [2024-07-15 21:45:24.310891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.755 [2024-07-15 21:45:24.310905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.755 [2024-07-15 21:45:24.310910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.755 [2024-07-15 21:45:24.310914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.755 [2024-07-15 21:45:24.310925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.755 qpair failed and we were unable to recover it. 00:29:34.755 [2024-07-15 21:45:24.320865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.755 [2024-07-15 21:45:24.320935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.755 [2024-07-15 21:45:24.320947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.755 [2024-07-15 21:45:24.320952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.755 [2024-07-15 21:45:24.320957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.755 [2024-07-15 21:45:24.320967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.755 qpair failed and we were unable to recover it. 00:29:34.755 [2024-07-15 21:45:24.330881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.755 [2024-07-15 21:45:24.330946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.755 [2024-07-15 21:45:24.330958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.755 [2024-07-15 21:45:24.330963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.755 [2024-07-15 21:45:24.330967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.755 [2024-07-15 21:45:24.330978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.755 qpair failed and we were unable to recover it. 
00:29:34.755 [2024-07-15 21:45:24.340952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.755 [2024-07-15 21:45:24.341059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.755 [2024-07-15 21:45:24.341071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.755 [2024-07-15 21:45:24.341076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.755 [2024-07-15 21:45:24.341080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.755 [2024-07-15 21:45:24.341090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.755 qpair failed and we were unable to recover it. 00:29:34.755 [2024-07-15 21:45:24.350937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.755 [2024-07-15 21:45:24.351012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.755 [2024-07-15 21:45:24.351025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.755 [2024-07-15 21:45:24.351030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.755 [2024-07-15 21:45:24.351034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.755 [2024-07-15 21:45:24.351044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.755 qpair failed and we were unable to recover it. 00:29:34.755 [2024-07-15 21:45:24.360981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.755 [2024-07-15 21:45:24.361051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.755 [2024-07-15 21:45:24.361063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.755 [2024-07-15 21:45:24.361069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.755 [2024-07-15 21:45:24.361073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.755 [2024-07-15 21:45:24.361083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.755 qpair failed and we were unable to recover it. 
00:29:34.755 [2024-07-15 21:45:24.371044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.755 [2024-07-15 21:45:24.371115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.755 [2024-07-15 21:45:24.371131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.755 [2024-07-15 21:45:24.371137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.755 [2024-07-15 21:45:24.371141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.755 [2024-07-15 21:45:24.371152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.755 qpair failed and we were unable to recover it. 00:29:34.755 [2024-07-15 21:45:24.381016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.755 [2024-07-15 21:45:24.381087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.755 [2024-07-15 21:45:24.381100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.755 [2024-07-15 21:45:24.381105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.755 [2024-07-15 21:45:24.381109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.755 [2024-07-15 21:45:24.381119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.755 qpair failed and we were unable to recover it. 00:29:34.755 [2024-07-15 21:45:24.391046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.755 [2024-07-15 21:45:24.391119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.755 [2024-07-15 21:45:24.391137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.755 [2024-07-15 21:45:24.391142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.755 [2024-07-15 21:45:24.391146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.755 [2024-07-15 21:45:24.391157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.755 qpair failed and we were unable to recover it. 
00:29:34.756 [2024-07-15 21:45:24.401177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.756 [2024-07-15 21:45:24.401254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.756 [2024-07-15 21:45:24.401266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.756 [2024-07-15 21:45:24.401271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.756 [2024-07-15 21:45:24.401275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.756 [2024-07-15 21:45:24.401286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.756 qpair failed and we were unable to recover it. 00:29:34.756 [2024-07-15 21:45:24.411193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.756 [2024-07-15 21:45:24.411264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.756 [2024-07-15 21:45:24.411276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.756 [2024-07-15 21:45:24.411281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.756 [2024-07-15 21:45:24.411285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.756 [2024-07-15 21:45:24.411296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.756 qpair failed and we were unable to recover it. 00:29:34.756 [2024-07-15 21:45:24.421190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.756 [2024-07-15 21:45:24.421260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.756 [2024-07-15 21:45:24.421271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.756 [2024-07-15 21:45:24.421276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.756 [2024-07-15 21:45:24.421281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.756 [2024-07-15 21:45:24.421291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.756 qpair failed and we were unable to recover it. 
00:29:34.756 [2024-07-15 21:45:24.431237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.756 [2024-07-15 21:45:24.431316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.756 [2024-07-15 21:45:24.431329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.756 [2024-07-15 21:45:24.431334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.756 [2024-07-15 21:45:24.431338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.756 [2024-07-15 21:45:24.431354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.756 qpair failed and we were unable to recover it. 00:29:34.756 [2024-07-15 21:45:24.441172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.756 [2024-07-15 21:45:24.441270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.756 [2024-07-15 21:45:24.441282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.756 [2024-07-15 21:45:24.441287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.756 [2024-07-15 21:45:24.441291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.756 [2024-07-15 21:45:24.441302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.756 qpair failed and we were unable to recover it. 00:29:34.756 [2024-07-15 21:45:24.451094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.756 [2024-07-15 21:45:24.451173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.756 [2024-07-15 21:45:24.451185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.756 [2024-07-15 21:45:24.451190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.756 [2024-07-15 21:45:24.451194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.756 [2024-07-15 21:45:24.451206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.756 qpair failed and we were unable to recover it. 
00:29:34.756 [2024-07-15 21:45:24.461224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.756 [2024-07-15 21:45:24.461305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.756 [2024-07-15 21:45:24.461318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.756 [2024-07-15 21:45:24.461323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.756 [2024-07-15 21:45:24.461327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.756 [2024-07-15 21:45:24.461337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.756 qpair failed and we were unable to recover it. 00:29:34.756 [2024-07-15 21:45:24.471186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.756 [2024-07-15 21:45:24.471260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.756 [2024-07-15 21:45:24.471272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.756 [2024-07-15 21:45:24.471277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.756 [2024-07-15 21:45:24.471282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.756 [2024-07-15 21:45:24.471292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.756 qpair failed and we were unable to recover it. 00:29:34.756 [2024-07-15 21:45:24.481401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.756 [2024-07-15 21:45:24.481468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.756 [2024-07-15 21:45:24.481484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.756 [2024-07-15 21:45:24.481489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.756 [2024-07-15 21:45:24.481493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.756 [2024-07-15 21:45:24.481504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.756 qpair failed and we were unable to recover it. 
00:29:34.756 [2024-07-15 21:45:24.491336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.756 [2024-07-15 21:45:24.491407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.756 [2024-07-15 21:45:24.491420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.756 [2024-07-15 21:45:24.491424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.756 [2024-07-15 21:45:24.491428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.756 [2024-07-15 21:45:24.491439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.756 qpair failed and we were unable to recover it. 00:29:34.756 [2024-07-15 21:45:24.501286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.756 [2024-07-15 21:45:24.501365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.756 [2024-07-15 21:45:24.501378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.756 [2024-07-15 21:45:24.501383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.756 [2024-07-15 21:45:24.501387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.756 [2024-07-15 21:45:24.501397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.756 qpair failed and we were unable to recover it. 00:29:34.756 [2024-07-15 21:45:24.511385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.756 [2024-07-15 21:45:24.511466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.756 [2024-07-15 21:45:24.511478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.756 [2024-07-15 21:45:24.511482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.756 [2024-07-15 21:45:24.511486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.756 [2024-07-15 21:45:24.511497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.756 qpair failed and we were unable to recover it. 
00:29:34.756 [2024-07-15 21:45:24.521417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.756 [2024-07-15 21:45:24.521500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.756 [2024-07-15 21:45:24.521513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.756 [2024-07-15 21:45:24.521518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.756 [2024-07-15 21:45:24.521522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.756 [2024-07-15 21:45:24.521535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.756 qpair failed and we were unable to recover it. 00:29:34.756 [2024-07-15 21:45:24.531483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.756 [2024-07-15 21:45:24.531551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.756 [2024-07-15 21:45:24.531563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.756 [2024-07-15 21:45:24.531568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.756 [2024-07-15 21:45:24.531572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.756 [2024-07-15 21:45:24.531583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.756 qpair failed and we were unable to recover it. 00:29:34.756 [2024-07-15 21:45:24.541481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.756 [2024-07-15 21:45:24.541552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.756 [2024-07-15 21:45:24.541565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.757 [2024-07-15 21:45:24.541570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.757 [2024-07-15 21:45:24.541574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.757 [2024-07-15 21:45:24.541584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.757 qpair failed and we were unable to recover it. 
00:29:34.757 [2024-07-15 21:45:24.551482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.757 [2024-07-15 21:45:24.551560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.757 [2024-07-15 21:45:24.551572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.757 [2024-07-15 21:45:24.551577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.757 [2024-07-15 21:45:24.551581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:34.757 [2024-07-15 21:45:24.551592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.757 qpair failed and we were unable to recover it. 00:29:35.019 [2024-07-15 21:45:24.561499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.019 [2024-07-15 21:45:24.561570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.019 [2024-07-15 21:45:24.561583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.019 [2024-07-15 21:45:24.561587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.019 [2024-07-15 21:45:24.561592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.019 [2024-07-15 21:45:24.561602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.019 qpair failed and we were unable to recover it. 00:29:35.019 [2024-07-15 21:45:24.571553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.019 [2024-07-15 21:45:24.571623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.019 [2024-07-15 21:45:24.571635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.019 [2024-07-15 21:45:24.571640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.019 [2024-07-15 21:45:24.571644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.019 [2024-07-15 21:45:24.571654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.019 qpair failed and we were unable to recover it. 
00:29:35.019 [2024-07-15 21:45:24.581462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.019 [2024-07-15 21:45:24.581536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.019 [2024-07-15 21:45:24.581548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.019 [2024-07-15 21:45:24.581553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.019 [2024-07-15 21:45:24.581557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.019 [2024-07-15 21:45:24.581567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.019 qpair failed and we were unable to recover it. 00:29:35.019 [2024-07-15 21:45:24.591521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.019 [2024-07-15 21:45:24.591615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.019 [2024-07-15 21:45:24.591628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.019 [2024-07-15 21:45:24.591632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.019 [2024-07-15 21:45:24.591636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.019 [2024-07-15 21:45:24.591647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.019 qpair failed and we were unable to recover it. 00:29:35.019 [2024-07-15 21:45:24.601624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.019 [2024-07-15 21:45:24.601690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.019 [2024-07-15 21:45:24.601702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.019 [2024-07-15 21:45:24.601707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.019 [2024-07-15 21:45:24.601711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.019 [2024-07-15 21:45:24.601721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.019 qpair failed and we were unable to recover it. 
00:29:35.019 [2024-07-15 21:45:24.611650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.019 [2024-07-15 21:45:24.611716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.019 [2024-07-15 21:45:24.611728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.019 [2024-07-15 21:45:24.611733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.019 [2024-07-15 21:45:24.611740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.019 [2024-07-15 21:45:24.611750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.019 qpair failed and we were unable to recover it. 00:29:35.019 [2024-07-15 21:45:24.621664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.019 [2024-07-15 21:45:24.621737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.019 [2024-07-15 21:45:24.621755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.019 [2024-07-15 21:45:24.621762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.019 [2024-07-15 21:45:24.621766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.020 [2024-07-15 21:45:24.621780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.020 qpair failed and we were unable to recover it. 00:29:35.020 [2024-07-15 21:45:24.631752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.020 [2024-07-15 21:45:24.631838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.020 [2024-07-15 21:45:24.631857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.020 [2024-07-15 21:45:24.631862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.020 [2024-07-15 21:45:24.631867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.020 [2024-07-15 21:45:24.631881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.020 qpair failed and we were unable to recover it. 
00:29:35.020 [2024-07-15 21:45:24.641734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.020 [2024-07-15 21:45:24.641801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.020 [2024-07-15 21:45:24.641814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.020 [2024-07-15 21:45:24.641819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.020 [2024-07-15 21:45:24.641824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.020 [2024-07-15 21:45:24.641835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.020 qpair failed and we were unable to recover it. 00:29:35.020 [2024-07-15 21:45:24.651743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.020 [2024-07-15 21:45:24.651810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.020 [2024-07-15 21:45:24.651823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.020 [2024-07-15 21:45:24.651828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.020 [2024-07-15 21:45:24.651832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.020 [2024-07-15 21:45:24.651843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.020 qpair failed and we were unable to recover it. 00:29:35.020 [2024-07-15 21:45:24.661835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.020 [2024-07-15 21:45:24.661920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.020 [2024-07-15 21:45:24.661932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.020 [2024-07-15 21:45:24.661937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.020 [2024-07-15 21:45:24.661941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.020 [2024-07-15 21:45:24.661952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.020 qpair failed and we were unable to recover it. 
00:29:35.020 [2024-07-15 21:45:24.671819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.020 [2024-07-15 21:45:24.671900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.020 [2024-07-15 21:45:24.671918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.020 [2024-07-15 21:45:24.671924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.020 [2024-07-15 21:45:24.671929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.020 [2024-07-15 21:45:24.671943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.020 qpair failed and we were unable to recover it. 00:29:35.020 [2024-07-15 21:45:24.681846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.020 [2024-07-15 21:45:24.681918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.020 [2024-07-15 21:45:24.681936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.020 [2024-07-15 21:45:24.681942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.020 [2024-07-15 21:45:24.681947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.020 [2024-07-15 21:45:24.681961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.020 qpair failed and we were unable to recover it. 00:29:35.020 [2024-07-15 21:45:24.691865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.020 [2024-07-15 21:45:24.691935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.020 [2024-07-15 21:45:24.691948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.020 [2024-07-15 21:45:24.691953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.020 [2024-07-15 21:45:24.691958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.020 [2024-07-15 21:45:24.691969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.020 qpair failed and we were unable to recover it. 
00:29:35.020 [2024-07-15 21:45:24.701903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.020 [2024-07-15 21:45:24.701975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.020 [2024-07-15 21:45:24.701988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.020 [2024-07-15 21:45:24.701996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.020 [2024-07-15 21:45:24.702001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.020 [2024-07-15 21:45:24.702012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.020 qpair failed and we were unable to recover it. 00:29:35.020 [2024-07-15 21:45:24.711959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.020 [2024-07-15 21:45:24.712035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.020 [2024-07-15 21:45:24.712047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.020 [2024-07-15 21:45:24.712052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.020 [2024-07-15 21:45:24.712056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.020 [2024-07-15 21:45:24.712067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.020 qpair failed and we were unable to recover it. 00:29:35.020 [2024-07-15 21:45:24.721995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.020 [2024-07-15 21:45:24.722112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.020 [2024-07-15 21:45:24.722128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.020 [2024-07-15 21:45:24.722133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.020 [2024-07-15 21:45:24.722137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.020 [2024-07-15 21:45:24.722148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.020 qpair failed and we were unable to recover it. 
00:29:35.020 [2024-07-15 21:45:24.731980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.020 [2024-07-15 21:45:24.732049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.020 [2024-07-15 21:45:24.732062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.020 [2024-07-15 21:45:24.732067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.020 [2024-07-15 21:45:24.732071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.020 [2024-07-15 21:45:24.732082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.020 qpair failed and we were unable to recover it. 00:29:35.020 [2024-07-15 21:45:24.742024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.020 [2024-07-15 21:45:24.742095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.020 [2024-07-15 21:45:24.742107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.020 [2024-07-15 21:45:24.742111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.020 [2024-07-15 21:45:24.742116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.020 [2024-07-15 21:45:24.742129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.020 qpair failed and we were unable to recover it. 00:29:35.020 [2024-07-15 21:45:24.752038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.020 [2024-07-15 21:45:24.752111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.020 [2024-07-15 21:45:24.752126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.020 [2024-07-15 21:45:24.752131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.020 [2024-07-15 21:45:24.752135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.020 [2024-07-15 21:45:24.752146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.020 qpair failed and we were unable to recover it. 
00:29:35.020 [2024-07-15 21:45:24.762071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.020 [2024-07-15 21:45:24.762134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.020 [2024-07-15 21:45:24.762146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.020 [2024-07-15 21:45:24.762151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.020 [2024-07-15 21:45:24.762155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.020 [2024-07-15 21:45:24.762166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.020 qpair failed and we were unable to recover it. 00:29:35.020 [2024-07-15 21:45:24.772096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.021 [2024-07-15 21:45:24.772170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.021 [2024-07-15 21:45:24.772183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.021 [2024-07-15 21:45:24.772188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.021 [2024-07-15 21:45:24.772192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.021 [2024-07-15 21:45:24.772203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.021 qpair failed and we were unable to recover it. 00:29:35.021 [2024-07-15 21:45:24.782148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.021 [2024-07-15 21:45:24.782220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.021 [2024-07-15 21:45:24.782232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.021 [2024-07-15 21:45:24.782237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.021 [2024-07-15 21:45:24.782241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.021 [2024-07-15 21:45:24.782252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.021 qpair failed and we were unable to recover it. 
00:29:35.021 [2024-07-15 21:45:24.792173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.021 [2024-07-15 21:45:24.792250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.021 [2024-07-15 21:45:24.792264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.021 [2024-07-15 21:45:24.792269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.021 [2024-07-15 21:45:24.792273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.021 [2024-07-15 21:45:24.792284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.021 qpair failed and we were unable to recover it. 00:29:35.021 [2024-07-15 21:45:24.802253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.021 [2024-07-15 21:45:24.802321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.021 [2024-07-15 21:45:24.802333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.021 [2024-07-15 21:45:24.802338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.021 [2024-07-15 21:45:24.802342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.021 [2024-07-15 21:45:24.802352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.021 qpair failed and we were unable to recover it. 00:29:35.021 [2024-07-15 21:45:24.812229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.021 [2024-07-15 21:45:24.812304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.021 [2024-07-15 21:45:24.812316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.021 [2024-07-15 21:45:24.812321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.021 [2024-07-15 21:45:24.812325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.021 [2024-07-15 21:45:24.812335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.021 qpair failed and we were unable to recover it. 
00:29:35.021 [2024-07-15 21:45:24.822272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.021 [2024-07-15 21:45:24.822344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.021 [2024-07-15 21:45:24.822356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.021 [2024-07-15 21:45:24.822360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.021 [2024-07-15 21:45:24.822364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.021 [2024-07-15 21:45:24.822375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.021 qpair failed and we were unable to recover it. 00:29:35.283 [2024-07-15 21:45:24.832276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.283 [2024-07-15 21:45:24.832350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.283 [2024-07-15 21:45:24.832363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.284 [2024-07-15 21:45:24.832368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.284 [2024-07-15 21:45:24.832372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.284 [2024-07-15 21:45:24.832385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.284 qpair failed and we were unable to recover it. 00:29:35.284 [2024-07-15 21:45:24.842315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.284 [2024-07-15 21:45:24.842385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.284 [2024-07-15 21:45:24.842397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.284 [2024-07-15 21:45:24.842402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.284 [2024-07-15 21:45:24.842406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.284 [2024-07-15 21:45:24.842417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.284 qpair failed and we were unable to recover it. 
00:29:35.284 [2024-07-15 21:45:24.852362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.284 [2024-07-15 21:45:24.852435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.284 [2024-07-15 21:45:24.852447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.284 [2024-07-15 21:45:24.852452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.284 [2024-07-15 21:45:24.852456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.284 [2024-07-15 21:45:24.852466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.284 qpair failed and we were unable to recover it. 00:29:35.284 [2024-07-15 21:45:24.862374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.284 [2024-07-15 21:45:24.862476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.284 [2024-07-15 21:45:24.862488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.284 [2024-07-15 21:45:24.862493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.284 [2024-07-15 21:45:24.862497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.284 [2024-07-15 21:45:24.862508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.284 qpair failed and we were unable to recover it. 00:29:35.284 [2024-07-15 21:45:24.872402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.284 [2024-07-15 21:45:24.872475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.284 [2024-07-15 21:45:24.872487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.284 [2024-07-15 21:45:24.872492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.284 [2024-07-15 21:45:24.872496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.284 [2024-07-15 21:45:24.872506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.284 qpair failed and we were unable to recover it. 
00:29:35.284 [2024-07-15 21:45:24.882319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.284 [2024-07-15 21:45:24.882388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.284 [2024-07-15 21:45:24.882403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.284 [2024-07-15 21:45:24.882409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.284 [2024-07-15 21:45:24.882413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.284 [2024-07-15 21:45:24.882423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.284 qpair failed and we were unable to recover it. 00:29:35.284 [2024-07-15 21:45:24.892423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.284 [2024-07-15 21:45:24.892492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.284 [2024-07-15 21:45:24.892504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.284 [2024-07-15 21:45:24.892509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.284 [2024-07-15 21:45:24.892514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.284 [2024-07-15 21:45:24.892524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.284 qpair failed and we were unable to recover it. 00:29:35.284 [2024-07-15 21:45:24.902508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.284 [2024-07-15 21:45:24.902583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.284 [2024-07-15 21:45:24.902595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.284 [2024-07-15 21:45:24.902599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.284 [2024-07-15 21:45:24.902603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.284 [2024-07-15 21:45:24.902614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.284 qpair failed and we were unable to recover it. 
00:29:35.284 [2024-07-15 21:45:24.912557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.284 [2024-07-15 21:45:24.912637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.284 [2024-07-15 21:45:24.912649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.284 [2024-07-15 21:45:24.912654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.284 [2024-07-15 21:45:24.912658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.284 [2024-07-15 21:45:24.912668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.284 qpair failed and we were unable to recover it. 00:29:35.284 [2024-07-15 21:45:24.922553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.284 [2024-07-15 21:45:24.922622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.284 [2024-07-15 21:45:24.922634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.284 [2024-07-15 21:45:24.922639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.284 [2024-07-15 21:45:24.922643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.284 [2024-07-15 21:45:24.922657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.284 qpair failed and we were unable to recover it. 00:29:35.284 [2024-07-15 21:45:24.932476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.284 [2024-07-15 21:45:24.932542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.284 [2024-07-15 21:45:24.932555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.284 [2024-07-15 21:45:24.932560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.284 [2024-07-15 21:45:24.932564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.284 [2024-07-15 21:45:24.932575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.284 qpair failed and we were unable to recover it. 
00:29:35.284 [2024-07-15 21:45:24.942610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.284 [2024-07-15 21:45:24.942679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.284 [2024-07-15 21:45:24.942692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.284 [2024-07-15 21:45:24.942697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.284 [2024-07-15 21:45:24.942701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.284 [2024-07-15 21:45:24.942711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.284 qpair failed and we were unable to recover it. 00:29:35.284 [2024-07-15 21:45:24.952640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.284 [2024-07-15 21:45:24.952731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.284 [2024-07-15 21:45:24.952743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.284 [2024-07-15 21:45:24.952748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.284 [2024-07-15 21:45:24.952752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.284 [2024-07-15 21:45:24.952762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.284 qpair failed and we were unable to recover it. 00:29:35.284 [2024-07-15 21:45:24.962663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.284 [2024-07-15 21:45:24.962733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.284 [2024-07-15 21:45:24.962752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.284 [2024-07-15 21:45:24.962758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.284 [2024-07-15 21:45:24.962762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.284 [2024-07-15 21:45:24.962776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.284 qpair failed and we were unable to recover it. 
00:29:35.284 [2024-07-15 21:45:24.972693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.284 [2024-07-15 21:45:24.972776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.284 [2024-07-15 21:45:24.972790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.284 [2024-07-15 21:45:24.972798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.284 [2024-07-15 21:45:24.972802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.285 [2024-07-15 21:45:24.972815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.285 qpair failed and we were unable to recover it. 00:29:35.285 [2024-07-15 21:45:24.982746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.285 [2024-07-15 21:45:24.982828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.285 [2024-07-15 21:45:24.982847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.285 [2024-07-15 21:45:24.982853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.285 [2024-07-15 21:45:24.982858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.285 [2024-07-15 21:45:24.982872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.285 qpair failed and we were unable to recover it. 00:29:35.285 [2024-07-15 21:45:24.992696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.285 [2024-07-15 21:45:24.992775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.285 [2024-07-15 21:45:24.992793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.285 [2024-07-15 21:45:24.992799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.285 [2024-07-15 21:45:24.992803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.285 [2024-07-15 21:45:24.992818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.285 qpair failed and we were unable to recover it. 
00:29:35.285 [2024-07-15 21:45:25.002737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.285 [2024-07-15 21:45:25.002817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.285 [2024-07-15 21:45:25.002836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.285 [2024-07-15 21:45:25.002842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.285 [2024-07-15 21:45:25.002846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.285 [2024-07-15 21:45:25.002860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.285 qpair failed and we were unable to recover it. 00:29:35.285 [2024-07-15 21:45:25.012830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.285 [2024-07-15 21:45:25.013041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.285 [2024-07-15 21:45:25.013054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.285 [2024-07-15 21:45:25.013059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.285 [2024-07-15 21:45:25.013066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.285 [2024-07-15 21:45:25.013078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.285 qpair failed and we were unable to recover it. 00:29:35.285 [2024-07-15 21:45:25.022837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.285 [2024-07-15 21:45:25.022907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.285 [2024-07-15 21:45:25.022920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.285 [2024-07-15 21:45:25.022925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.285 [2024-07-15 21:45:25.022929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.285 [2024-07-15 21:45:25.022939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.285 qpair failed and we were unable to recover it. 
00:29:35.285 [2024-07-15 21:45:25.032873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.285 [2024-07-15 21:45:25.032947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.285 [2024-07-15 21:45:25.032959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.285 [2024-07-15 21:45:25.032964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.285 [2024-07-15 21:45:25.032968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.285 [2024-07-15 21:45:25.032979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.285 qpair failed and we were unable to recover it. 00:29:35.285 [2024-07-15 21:45:25.042926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.285 [2024-07-15 21:45:25.043040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.285 [2024-07-15 21:45:25.043052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.285 [2024-07-15 21:45:25.043058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.285 [2024-07-15 21:45:25.043062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.285 [2024-07-15 21:45:25.043072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.285 qpair failed and we were unable to recover it. 00:29:35.285 [2024-07-15 21:45:25.052823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.285 [2024-07-15 21:45:25.052893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.285 [2024-07-15 21:45:25.052905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.285 [2024-07-15 21:45:25.052910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.285 [2024-07-15 21:45:25.052914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.285 [2024-07-15 21:45:25.052924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.285 qpair failed and we were unable to recover it. 
00:29:35.285 [2024-07-15 21:45:25.062947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.285 [2024-07-15 21:45:25.063018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.285 [2024-07-15 21:45:25.063030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.285 [2024-07-15 21:45:25.063035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.285 [2024-07-15 21:45:25.063039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.285 [2024-07-15 21:45:25.063050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.285 qpair failed and we were unable to recover it. 00:29:35.285 [2024-07-15 21:45:25.072994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.285 [2024-07-15 21:45:25.073066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.285 [2024-07-15 21:45:25.073078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.285 [2024-07-15 21:45:25.073083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.285 [2024-07-15 21:45:25.073087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.285 [2024-07-15 21:45:25.073097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.285 qpair failed and we were unable to recover it. 00:29:35.285 [2024-07-15 21:45:25.083084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.285 [2024-07-15 21:45:25.083161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.285 [2024-07-15 21:45:25.083173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.285 [2024-07-15 21:45:25.083178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.285 [2024-07-15 21:45:25.083182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.285 [2024-07-15 21:45:25.083193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.285 qpair failed and we were unable to recover it. 
00:29:35.548 [2024-07-15 21:45:25.093049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.548 [2024-07-15 21:45:25.093151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.548 [2024-07-15 21:45:25.093163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.548 [2024-07-15 21:45:25.093168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.548 [2024-07-15 21:45:25.093172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.548 [2024-07-15 21:45:25.093183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.548 qpair failed and we were unable to recover it. 00:29:35.548 [2024-07-15 21:45:25.103049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.548 [2024-07-15 21:45:25.103117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.548 [2024-07-15 21:45:25.103132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.548 [2024-07-15 21:45:25.103140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.548 [2024-07-15 21:45:25.103144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.548 [2024-07-15 21:45:25.103154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.548 qpair failed and we were unable to recover it. 00:29:35.548 [2024-07-15 21:45:25.112976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.548 [2024-07-15 21:45:25.113051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.548 [2024-07-15 21:45:25.113063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.548 [2024-07-15 21:45:25.113067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.548 [2024-07-15 21:45:25.113071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.548 [2024-07-15 21:45:25.113082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.548 qpair failed and we were unable to recover it. 
00:29:35.548 [2024-07-15 21:45:25.123126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.548 [2024-07-15 21:45:25.123196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.548 [2024-07-15 21:45:25.123209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.548 [2024-07-15 21:45:25.123213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.548 [2024-07-15 21:45:25.123218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.548 [2024-07-15 21:45:25.123228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.548 qpair failed and we were unable to recover it. 00:29:35.548 [2024-07-15 21:45:25.133055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.548 [2024-07-15 21:45:25.133131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.548 [2024-07-15 21:45:25.133143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.548 [2024-07-15 21:45:25.133148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.548 [2024-07-15 21:45:25.133152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.548 [2024-07-15 21:45:25.133163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.548 qpair failed and we were unable to recover it. 00:29:35.548 [2024-07-15 21:45:25.143184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.548 [2024-07-15 21:45:25.143257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.548 [2024-07-15 21:45:25.143270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.548 [2024-07-15 21:45:25.143275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.548 [2024-07-15 21:45:25.143279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.548 [2024-07-15 21:45:25.143290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.548 qpair failed and we were unable to recover it. 
00:29:35.548 [2024-07-15 21:45:25.153184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.548 [2024-07-15 21:45:25.153256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.548 [2024-07-15 21:45:25.153269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.548 [2024-07-15 21:45:25.153274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.548 [2024-07-15 21:45:25.153278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.548 [2024-07-15 21:45:25.153288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.548 qpair failed and we were unable to recover it. 00:29:35.548 [2024-07-15 21:45:25.163208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.548 [2024-07-15 21:45:25.163276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.548 [2024-07-15 21:45:25.163287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.548 [2024-07-15 21:45:25.163292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.548 [2024-07-15 21:45:25.163296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.548 [2024-07-15 21:45:25.163307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.548 qpair failed and we were unable to recover it. 00:29:35.548 [2024-07-15 21:45:25.173180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.548 [2024-07-15 21:45:25.173249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.548 [2024-07-15 21:45:25.173260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.548 [2024-07-15 21:45:25.173265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.548 [2024-07-15 21:45:25.173269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.548 [2024-07-15 21:45:25.173280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.549 qpair failed and we were unable to recover it. 
00:29:35.549 [2024-07-15 21:45:25.183296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.549 [2024-07-15 21:45:25.183377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.549 [2024-07-15 21:45:25.183389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.549 [2024-07-15 21:45:25.183395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.549 [2024-07-15 21:45:25.183399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.549 [2024-07-15 21:45:25.183410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.549 qpair failed and we were unable to recover it. 00:29:35.549 [2024-07-15 21:45:25.193314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.549 [2024-07-15 21:45:25.193402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.549 [2024-07-15 21:45:25.193414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.549 [2024-07-15 21:45:25.193422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.549 [2024-07-15 21:45:25.193426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.549 [2024-07-15 21:45:25.193437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.549 qpair failed and we were unable to recover it. 00:29:35.549 [2024-07-15 21:45:25.203260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.549 [2024-07-15 21:45:25.203329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.549 [2024-07-15 21:45:25.203341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.549 [2024-07-15 21:45:25.203346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.549 [2024-07-15 21:45:25.203350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.549 [2024-07-15 21:45:25.203360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.549 qpair failed and we were unable to recover it. 
00:29:35.549 [2024-07-15 21:45:25.213330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.549 [2024-07-15 21:45:25.213391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.549 [2024-07-15 21:45:25.213403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.549 [2024-07-15 21:45:25.213407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.549 [2024-07-15 21:45:25.213411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.549 [2024-07-15 21:45:25.213422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.549 qpair failed and we were unable to recover it. 00:29:35.549 [2024-07-15 21:45:25.223381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.549 [2024-07-15 21:45:25.223590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.549 [2024-07-15 21:45:25.223602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.549 [2024-07-15 21:45:25.223607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.549 [2024-07-15 21:45:25.223611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.549 [2024-07-15 21:45:25.223621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.549 qpair failed and we were unable to recover it. 00:29:35.549 [2024-07-15 21:45:25.233463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.549 [2024-07-15 21:45:25.233542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.549 [2024-07-15 21:45:25.233553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.549 [2024-07-15 21:45:25.233558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.549 [2024-07-15 21:45:25.233562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.549 [2024-07-15 21:45:25.233572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.549 qpair failed and we were unable to recover it. 
00:29:35.549 [2024-07-15 21:45:25.243455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.549 [2024-07-15 21:45:25.243528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.549 [2024-07-15 21:45:25.243540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.549 [2024-07-15 21:45:25.243545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.549 [2024-07-15 21:45:25.243549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.549 [2024-07-15 21:45:25.243559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.549 qpair failed and we were unable to recover it. 00:29:35.549 [2024-07-15 21:45:25.253491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.549 [2024-07-15 21:45:25.253600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.549 [2024-07-15 21:45:25.253612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.549 [2024-07-15 21:45:25.253617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.549 [2024-07-15 21:45:25.253621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.549 [2024-07-15 21:45:25.253632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.549 qpair failed and we were unable to recover it. 00:29:35.549 [2024-07-15 21:45:25.263499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.549 [2024-07-15 21:45:25.263577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.549 [2024-07-15 21:45:25.263589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.549 [2024-07-15 21:45:25.263594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.549 [2024-07-15 21:45:25.263598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.549 [2024-07-15 21:45:25.263609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.549 qpair failed and we were unable to recover it. 
00:29:35.549 [2024-07-15 21:45:25.273537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.549 [2024-07-15 21:45:25.273611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.549 [2024-07-15 21:45:25.273623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.549 [2024-07-15 21:45:25.273628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.549 [2024-07-15 21:45:25.273632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.549 [2024-07-15 21:45:25.273642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.549 qpair failed and we were unable to recover it. 00:29:35.549 [2024-07-15 21:45:25.283555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.549 [2024-07-15 21:45:25.283623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.549 [2024-07-15 21:45:25.283638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.549 [2024-07-15 21:45:25.283643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.549 [2024-07-15 21:45:25.283647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.549 [2024-07-15 21:45:25.283658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.549 qpair failed and we were unable to recover it. 00:29:35.549 [2024-07-15 21:45:25.293477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.549 [2024-07-15 21:45:25.293540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.549 [2024-07-15 21:45:25.293553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.549 [2024-07-15 21:45:25.293558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.549 [2024-07-15 21:45:25.293562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.549 [2024-07-15 21:45:25.293573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.549 qpair failed and we were unable to recover it. 
00:29:35.549 [2024-07-15 21:45:25.303611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.549 [2024-07-15 21:45:25.303681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.549 [2024-07-15 21:45:25.303694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.549 [2024-07-15 21:45:25.303698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.549 [2024-07-15 21:45:25.303703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.549 [2024-07-15 21:45:25.303713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.549 qpair failed and we were unable to recover it. 00:29:35.549 [2024-07-15 21:45:25.313622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.549 [2024-07-15 21:45:25.313701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.549 [2024-07-15 21:45:25.313713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.549 [2024-07-15 21:45:25.313718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.549 [2024-07-15 21:45:25.313722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.549 [2024-07-15 21:45:25.313733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.549 qpair failed and we were unable to recover it. 00:29:35.549 [2024-07-15 21:45:25.323606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.550 [2024-07-15 21:45:25.323679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.550 [2024-07-15 21:45:25.323691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.550 [2024-07-15 21:45:25.323696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.550 [2024-07-15 21:45:25.323700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.550 [2024-07-15 21:45:25.323713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.550 qpair failed and we were unable to recover it. 
00:29:35.550 [2024-07-15 21:45:25.333718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.550 [2024-07-15 21:45:25.333792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.550 [2024-07-15 21:45:25.333805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.550 [2024-07-15 21:45:25.333810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.550 [2024-07-15 21:45:25.333814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.550 [2024-07-15 21:45:25.333824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.550 qpair failed and we were unable to recover it. 00:29:35.550 [2024-07-15 21:45:25.343745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.550 [2024-07-15 21:45:25.343818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.550 [2024-07-15 21:45:25.343837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.550 [2024-07-15 21:45:25.343843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.550 [2024-07-15 21:45:25.343848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.550 [2024-07-15 21:45:25.343862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.550 qpair failed and we were unable to recover it. 00:29:35.812 [2024-07-15 21:45:25.353779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.812 [2024-07-15 21:45:25.353870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.812 [2024-07-15 21:45:25.353888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.812 [2024-07-15 21:45:25.353894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.812 [2024-07-15 21:45:25.353899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.812 [2024-07-15 21:45:25.353913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.812 qpair failed and we were unable to recover it. 
00:29:35.812 [2024-07-15 21:45:25.363799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.812 [2024-07-15 21:45:25.363878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.812 [2024-07-15 21:45:25.363896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.812 [2024-07-15 21:45:25.363902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.812 [2024-07-15 21:45:25.363907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.812 [2024-07-15 21:45:25.363921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.812 qpair failed and we were unable to recover it. 00:29:35.812 [2024-07-15 21:45:25.373793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.812 [2024-07-15 21:45:25.373877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.812 [2024-07-15 21:45:25.373899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.812 [2024-07-15 21:45:25.373905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.812 [2024-07-15 21:45:25.373909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.812 [2024-07-15 21:45:25.373923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.812 qpair failed and we were unable to recover it. 00:29:35.812 [2024-07-15 21:45:25.383835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.812 [2024-07-15 21:45:25.383911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.812 [2024-07-15 21:45:25.383929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.812 [2024-07-15 21:45:25.383935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.812 [2024-07-15 21:45:25.383940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.812 [2024-07-15 21:45:25.383954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.812 qpair failed and we were unable to recover it. 
00:29:35.812 [2024-07-15 21:45:25.393871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.812 [2024-07-15 21:45:25.393943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.812 [2024-07-15 21:45:25.393956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.812 [2024-07-15 21:45:25.393961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.812 [2024-07-15 21:45:25.393965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.812 [2024-07-15 21:45:25.393976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.812 qpair failed and we were unable to recover it. 00:29:35.812 [2024-07-15 21:45:25.403896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.812 [2024-07-15 21:45:25.403962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.812 [2024-07-15 21:45:25.403974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.812 [2024-07-15 21:45:25.403979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.812 [2024-07-15 21:45:25.403984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.812 [2024-07-15 21:45:25.403994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.812 qpair failed and we were unable to recover it. 00:29:35.812 [2024-07-15 21:45:25.413965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.812 [2024-07-15 21:45:25.414040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.812 [2024-07-15 21:45:25.414052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.812 [2024-07-15 21:45:25.414058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.812 [2024-07-15 21:45:25.414066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.812 [2024-07-15 21:45:25.414077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.812 qpair failed and we were unable to recover it. 
00:29:35.813 [2024-07-15 21:45:25.423947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.813 [2024-07-15 21:45:25.424019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.813 [2024-07-15 21:45:25.424032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.813 [2024-07-15 21:45:25.424036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.813 [2024-07-15 21:45:25.424041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.813 [2024-07-15 21:45:25.424051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.813 qpair failed and we were unable to recover it. 00:29:35.813 [2024-07-15 21:45:25.433996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.813 [2024-07-15 21:45:25.434076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.813 [2024-07-15 21:45:25.434088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.813 [2024-07-15 21:45:25.434093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.813 [2024-07-15 21:45:25.434097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.813 [2024-07-15 21:45:25.434108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.813 qpair failed and we were unable to recover it. 00:29:35.813 [2024-07-15 21:45:25.444066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.813 [2024-07-15 21:45:25.444142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.813 [2024-07-15 21:45:25.444155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.813 [2024-07-15 21:45:25.444160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.813 [2024-07-15 21:45:25.444164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.813 [2024-07-15 21:45:25.444175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.813 qpair failed and we were unable to recover it. 
00:29:35.813 [2024-07-15 21:45:25.454048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.813 [2024-07-15 21:45:25.454116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.813 [2024-07-15 21:45:25.454132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.813 [2024-07-15 21:45:25.454137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.813 [2024-07-15 21:45:25.454141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.813 [2024-07-15 21:45:25.454152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.813 qpair failed and we were unable to recover it. 00:29:35.813 [2024-07-15 21:45:25.464063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.813 [2024-07-15 21:45:25.464139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.813 [2024-07-15 21:45:25.464151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.813 [2024-07-15 21:45:25.464156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.813 [2024-07-15 21:45:25.464160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.813 [2024-07-15 21:45:25.464171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.813 qpair failed and we were unable to recover it. 00:29:35.813 [2024-07-15 21:45:25.474098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.813 [2024-07-15 21:45:25.474178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.813 [2024-07-15 21:45:25.474190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.813 [2024-07-15 21:45:25.474195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.813 [2024-07-15 21:45:25.474199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.813 [2024-07-15 21:45:25.474210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.813 qpair failed and we were unable to recover it. 
00:29:35.813 [2024-07-15 21:45:25.484088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.813 [2024-07-15 21:45:25.484159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.813 [2024-07-15 21:45:25.484171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.813 [2024-07-15 21:45:25.484176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.813 [2024-07-15 21:45:25.484180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.813 [2024-07-15 21:45:25.484191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.813 qpair failed and we were unable to recover it. 00:29:35.813 [2024-07-15 21:45:25.494152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.813 [2024-07-15 21:45:25.494218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.813 [2024-07-15 21:45:25.494230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.813 [2024-07-15 21:45:25.494235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.813 [2024-07-15 21:45:25.494239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.813 [2024-07-15 21:45:25.494250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.813 qpair failed and we were unable to recover it. 00:29:35.813 [2024-07-15 21:45:25.504174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.813 [2024-07-15 21:45:25.504245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.813 [2024-07-15 21:45:25.504258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.813 [2024-07-15 21:45:25.504266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.813 [2024-07-15 21:45:25.504270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.813 [2024-07-15 21:45:25.504281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.813 qpair failed and we were unable to recover it. 
00:29:35.813 [2024-07-15 21:45:25.514230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.813 [2024-07-15 21:45:25.514302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.813 [2024-07-15 21:45:25.514314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.813 [2024-07-15 21:45:25.514319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.813 [2024-07-15 21:45:25.514323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.813 [2024-07-15 21:45:25.514334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.813 qpair failed and we were unable to recover it. 00:29:35.813 [2024-07-15 21:45:25.524228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.813 [2024-07-15 21:45:25.524305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.813 [2024-07-15 21:45:25.524317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.813 [2024-07-15 21:45:25.524322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.813 [2024-07-15 21:45:25.524326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.813 [2024-07-15 21:45:25.524337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.813 qpair failed and we were unable to recover it. 00:29:35.813 [2024-07-15 21:45:25.534266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.813 [2024-07-15 21:45:25.534338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.813 [2024-07-15 21:45:25.534350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.813 [2024-07-15 21:45:25.534355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.813 [2024-07-15 21:45:25.534359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.813 [2024-07-15 21:45:25.534370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.813 qpair failed and we were unable to recover it. 
00:29:35.813 [2024-07-15 21:45:25.544327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.813 [2024-07-15 21:45:25.544402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.813 [2024-07-15 21:45:25.544414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.813 [2024-07-15 21:45:25.544419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.813 [2024-07-15 21:45:25.544423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.813 [2024-07-15 21:45:25.544434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.813 qpair failed and we were unable to recover it. 00:29:35.814 [2024-07-15 21:45:25.554328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.814 [2024-07-15 21:45:25.554400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.814 [2024-07-15 21:45:25.554412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.814 [2024-07-15 21:45:25.554417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.814 [2024-07-15 21:45:25.554421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.814 [2024-07-15 21:45:25.554431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.814 qpair failed and we were unable to recover it. 00:29:35.814 [2024-07-15 21:45:25.564352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.814 [2024-07-15 21:45:25.564422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.814 [2024-07-15 21:45:25.564434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.814 [2024-07-15 21:45:25.564439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.814 [2024-07-15 21:45:25.564443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.814 [2024-07-15 21:45:25.564453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.814 qpair failed and we were unable to recover it. 
00:29:35.814 [2024-07-15 21:45:25.574384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.814 [2024-07-15 21:45:25.574450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.814 [2024-07-15 21:45:25.574462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.814 [2024-07-15 21:45:25.574467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.814 [2024-07-15 21:45:25.574471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.814 [2024-07-15 21:45:25.574481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.814 qpair failed and we were unable to recover it. 00:29:35.814 [2024-07-15 21:45:25.584431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.814 [2024-07-15 21:45:25.584505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.814 [2024-07-15 21:45:25.584517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.814 [2024-07-15 21:45:25.584521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.814 [2024-07-15 21:45:25.584526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.814 [2024-07-15 21:45:25.584536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.814 qpair failed and we were unable to recover it. 00:29:35.814 [2024-07-15 21:45:25.594444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.814 [2024-07-15 21:45:25.594520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.814 [2024-07-15 21:45:25.594531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.814 [2024-07-15 21:45:25.594539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.814 [2024-07-15 21:45:25.594543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.814 [2024-07-15 21:45:25.594554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.814 qpair failed and we were unable to recover it. 
00:29:35.814 [2024-07-15 21:45:25.604362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.814 [2024-07-15 21:45:25.604431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.814 [2024-07-15 21:45:25.604443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.814 [2024-07-15 21:45:25.604448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.814 [2024-07-15 21:45:25.604452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.814 [2024-07-15 21:45:25.604463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.814 qpair failed and we were unable to recover it. 00:29:35.814 [2024-07-15 21:45:25.614450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.814 [2024-07-15 21:45:25.614521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.814 [2024-07-15 21:45:25.614533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.814 [2024-07-15 21:45:25.614538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.814 [2024-07-15 21:45:25.614542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:35.814 [2024-07-15 21:45:25.614553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.814 qpair failed and we were unable to recover it. 00:29:36.077 [2024-07-15 21:45:25.624559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.077 [2024-07-15 21:45:25.624631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.077 [2024-07-15 21:45:25.624643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.077 [2024-07-15 21:45:25.624648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.077 [2024-07-15 21:45:25.624652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.077 [2024-07-15 21:45:25.624663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.077 qpair failed and we were unable to recover it. 
00:29:36.077 [2024-07-15 21:45:25.634575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.077 [2024-07-15 21:45:25.634648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.077 [2024-07-15 21:45:25.634660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.077 [2024-07-15 21:45:25.634665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.077 [2024-07-15 21:45:25.634669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.077 [2024-07-15 21:45:25.634679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.077 qpair failed and we were unable to recover it. 00:29:36.077 [2024-07-15 21:45:25.644594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.077 [2024-07-15 21:45:25.644663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.077 [2024-07-15 21:45:25.644675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.077 [2024-07-15 21:45:25.644680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.077 [2024-07-15 21:45:25.644684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.077 [2024-07-15 21:45:25.644695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.077 qpair failed and we were unable to recover it. 00:29:36.077 [2024-07-15 21:45:25.654624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.077 [2024-07-15 21:45:25.654693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.077 [2024-07-15 21:45:25.654705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.077 [2024-07-15 21:45:25.654710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.077 [2024-07-15 21:45:25.654714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.077 [2024-07-15 21:45:25.654725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.077 qpair failed and we were unable to recover it. 
00:29:36.077 [2024-07-15 21:45:25.664666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.077 [2024-07-15 21:45:25.664736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.077 [2024-07-15 21:45:25.664748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.077 [2024-07-15 21:45:25.664753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.077 [2024-07-15 21:45:25.664757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.077 [2024-07-15 21:45:25.664768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.077 qpair failed and we were unable to recover it. 00:29:36.077 [2024-07-15 21:45:25.674665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.077 [2024-07-15 21:45:25.674744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.077 [2024-07-15 21:45:25.674763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.077 [2024-07-15 21:45:25.674769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.077 [2024-07-15 21:45:25.674773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.077 [2024-07-15 21:45:25.674787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.077 qpair failed and we were unable to recover it. 00:29:36.077 [2024-07-15 21:45:25.684698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.077 [2024-07-15 21:45:25.684772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.077 [2024-07-15 21:45:25.684793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.077 [2024-07-15 21:45:25.684800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.077 [2024-07-15 21:45:25.684804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.077 [2024-07-15 21:45:25.684818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.077 qpair failed and we were unable to recover it. 
00:29:36.077 [2024-07-15 21:45:25.694756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.077 [2024-07-15 21:45:25.694851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.077 [2024-07-15 21:45:25.694866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.077 [2024-07-15 21:45:25.694871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.077 [2024-07-15 21:45:25.694876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.077 [2024-07-15 21:45:25.694888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.077 qpair failed and we were unable to recover it. 00:29:36.077 [2024-07-15 21:45:25.704760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.077 [2024-07-15 21:45:25.704847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.077 [2024-07-15 21:45:25.704860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.077 [2024-07-15 21:45:25.704865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.077 [2024-07-15 21:45:25.704869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.077 [2024-07-15 21:45:25.704880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.077 qpair failed and we were unable to recover it. 00:29:36.077 [2024-07-15 21:45:25.714895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.077 [2024-07-15 21:45:25.714975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.077 [2024-07-15 21:45:25.714993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.077 [2024-07-15 21:45:25.714999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.077 [2024-07-15 21:45:25.715004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.077 [2024-07-15 21:45:25.715018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.077 qpair failed and we were unable to recover it. 
00:29:36.077 [2024-07-15 21:45:25.724846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.077 [2024-07-15 21:45:25.724966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.077 [2024-07-15 21:45:25.724979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.077 [2024-07-15 21:45:25.724984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.077 [2024-07-15 21:45:25.724988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.078 [2024-07-15 21:45:25.725003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.078 qpair failed and we were unable to recover it. 00:29:36.078 [2024-07-15 21:45:25.734832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.078 [2024-07-15 21:45:25.734902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.078 [2024-07-15 21:45:25.734916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.078 [2024-07-15 21:45:25.734922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.078 [2024-07-15 21:45:25.734927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.078 [2024-07-15 21:45:25.734938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.078 qpair failed and we were unable to recover it. 00:29:36.078 [2024-07-15 21:45:25.744898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.078 [2024-07-15 21:45:25.745007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.078 [2024-07-15 21:45:25.745019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.078 [2024-07-15 21:45:25.745024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.078 [2024-07-15 21:45:25.745029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.078 [2024-07-15 21:45:25.745039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.078 qpair failed and we were unable to recover it. 
00:29:36.078 [2024-07-15 21:45:25.754905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.078 [2024-07-15 21:45:25.754979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.078 [2024-07-15 21:45:25.754991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.078 [2024-07-15 21:45:25.754996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.078 [2024-07-15 21:45:25.755000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.078 [2024-07-15 21:45:25.755011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.078 qpair failed and we were unable to recover it. 00:29:36.078 [2024-07-15 21:45:25.764964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.078 [2024-07-15 21:45:25.765066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.078 [2024-07-15 21:45:25.765078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.078 [2024-07-15 21:45:25.765083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.078 [2024-07-15 21:45:25.765087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.078 [2024-07-15 21:45:25.765097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.078 qpair failed and we were unable to recover it. 00:29:36.078 [2024-07-15 21:45:25.774959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.078 [2024-07-15 21:45:25.775028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.078 [2024-07-15 21:45:25.775043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.078 [2024-07-15 21:45:25.775050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.078 [2024-07-15 21:45:25.775054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.078 [2024-07-15 21:45:25.775066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.078 qpair failed and we were unable to recover it. 
00:29:36.078 [2024-07-15 21:45:25.784883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.078 [2024-07-15 21:45:25.784954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.078 [2024-07-15 21:45:25.784967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.078 [2024-07-15 21:45:25.784972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.078 [2024-07-15 21:45:25.784976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.078 [2024-07-15 21:45:25.784986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.078 qpair failed and we were unable to recover it. 00:29:36.078 [2024-07-15 21:45:25.794981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.078 [2024-07-15 21:45:25.795090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.078 [2024-07-15 21:45:25.795102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.078 [2024-07-15 21:45:25.795107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.078 [2024-07-15 21:45:25.795112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.078 [2024-07-15 21:45:25.795127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.078 qpair failed and we were unable to recover it. 00:29:36.078 [2024-07-15 21:45:25.805043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.078 [2024-07-15 21:45:25.805112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.078 [2024-07-15 21:45:25.805128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.078 [2024-07-15 21:45:25.805133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.078 [2024-07-15 21:45:25.805137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.078 [2024-07-15 21:45:25.805148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.078 qpair failed and we were unable to recover it. 
00:29:36.078 [2024-07-15 21:45:25.815097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.078 [2024-07-15 21:45:25.815168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.078 [2024-07-15 21:45:25.815180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.078 [2024-07-15 21:45:25.815186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.078 [2024-07-15 21:45:25.815194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.078 [2024-07-15 21:45:25.815205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.078 qpair failed and we were unable to recover it. 00:29:36.078 [2024-07-15 21:45:25.825094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.078 [2024-07-15 21:45:25.825168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.078 [2024-07-15 21:45:25.825180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.078 [2024-07-15 21:45:25.825185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.078 [2024-07-15 21:45:25.825189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.078 [2024-07-15 21:45:25.825200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.078 qpair failed and we were unable to recover it. 00:29:36.078 [2024-07-15 21:45:25.835158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.078 [2024-07-15 21:45:25.835259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.078 [2024-07-15 21:45:25.835271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.078 [2024-07-15 21:45:25.835276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.078 [2024-07-15 21:45:25.835281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.078 [2024-07-15 21:45:25.835291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.078 qpair failed and we were unable to recover it. 
00:29:36.078 [2024-07-15 21:45:25.845141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.078 [2024-07-15 21:45:25.845211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.078 [2024-07-15 21:45:25.845223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.078 [2024-07-15 21:45:25.845228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.078 [2024-07-15 21:45:25.845233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.078 [2024-07-15 21:45:25.845243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.078 qpair failed and we were unable to recover it. 00:29:36.078 [2024-07-15 21:45:25.855180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.078 [2024-07-15 21:45:25.855246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.079 [2024-07-15 21:45:25.855258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.079 [2024-07-15 21:45:25.855262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.079 [2024-07-15 21:45:25.855267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.079 [2024-07-15 21:45:25.855277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.079 qpair failed and we were unable to recover it. 00:29:36.079 [2024-07-15 21:45:25.865301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.079 [2024-07-15 21:45:25.865397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.079 [2024-07-15 21:45:25.865412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.079 [2024-07-15 21:45:25.865418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.079 [2024-07-15 21:45:25.865422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.079 [2024-07-15 21:45:25.865433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.079 qpair failed and we were unable to recover it. 
00:29:36.079 [2024-07-15 21:45:25.875314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.079 [2024-07-15 21:45:25.875391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.079 [2024-07-15 21:45:25.875403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.079 [2024-07-15 21:45:25.875408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.079 [2024-07-15 21:45:25.875413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.079 [2024-07-15 21:45:25.875424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.079 qpair failed and we were unable to recover it. 00:29:36.341 [2024-07-15 21:45:25.885306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.341 [2024-07-15 21:45:25.885419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.341 [2024-07-15 21:45:25.885432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.341 [2024-07-15 21:45:25.885437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.341 [2024-07-15 21:45:25.885441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.341 [2024-07-15 21:45:25.885452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.341 qpair failed and we were unable to recover it. 00:29:36.341 [2024-07-15 21:45:25.895304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.341 [2024-07-15 21:45:25.895373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.341 [2024-07-15 21:45:25.895385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.341 [2024-07-15 21:45:25.895390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.342 [2024-07-15 21:45:25.895394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.342 [2024-07-15 21:45:25.895405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.342 qpair failed and we were unable to recover it. 
00:29:36.342 [2024-07-15 21:45:25.905348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.342 [2024-07-15 21:45:25.905422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.342 [2024-07-15 21:45:25.905434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.342 [2024-07-15 21:45:25.905439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.342 [2024-07-15 21:45:25.905450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.342 [2024-07-15 21:45:25.905460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.342 qpair failed and we were unable to recover it. 00:29:36.342 [2024-07-15 21:45:25.915369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.342 [2024-07-15 21:45:25.915450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.342 [2024-07-15 21:45:25.915462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.342 [2024-07-15 21:45:25.915467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.342 [2024-07-15 21:45:25.915471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.342 [2024-07-15 21:45:25.915482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.342 qpair failed and we were unable to recover it. 00:29:36.342 [2024-07-15 21:45:25.925367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.342 [2024-07-15 21:45:25.925435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.342 [2024-07-15 21:45:25.925447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.342 [2024-07-15 21:45:25.925452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.342 [2024-07-15 21:45:25.925456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.342 [2024-07-15 21:45:25.925466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.342 qpair failed and we were unable to recover it. 
00:29:36.342 [2024-07-15 21:45:25.935436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.342 [2024-07-15 21:45:25.935522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.342 [2024-07-15 21:45:25.935534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.342 [2024-07-15 21:45:25.935539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.342 [2024-07-15 21:45:25.935543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.342 [2024-07-15 21:45:25.935554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.342 qpair failed and we were unable to recover it. 00:29:36.342 [2024-07-15 21:45:25.945378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.342 [2024-07-15 21:45:25.945448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.342 [2024-07-15 21:45:25.945461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.342 [2024-07-15 21:45:25.945466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.342 [2024-07-15 21:45:25.945470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.342 [2024-07-15 21:45:25.945481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.342 qpair failed and we were unable to recover it. 00:29:36.342 [2024-07-15 21:45:25.955445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.342 [2024-07-15 21:45:25.955519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.342 [2024-07-15 21:45:25.955531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.342 [2024-07-15 21:45:25.955536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.342 [2024-07-15 21:45:25.955540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.342 [2024-07-15 21:45:25.955550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.342 qpair failed and we were unable to recover it. 
00:29:36.342 [2024-07-15 21:45:25.965480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.342 [2024-07-15 21:45:25.965559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.342 [2024-07-15 21:45:25.965571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.342 [2024-07-15 21:45:25.965576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.342 [2024-07-15 21:45:25.965580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.342 [2024-07-15 21:45:25.965590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.342 qpair failed and we were unable to recover it. 00:29:36.342 [2024-07-15 21:45:25.975515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.342 [2024-07-15 21:45:25.975583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.342 [2024-07-15 21:45:25.975595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.342 [2024-07-15 21:45:25.975600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.342 [2024-07-15 21:45:25.975604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.342 [2024-07-15 21:45:25.975615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.342 qpair failed and we were unable to recover it. 00:29:36.342 [2024-07-15 21:45:25.985519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.342 [2024-07-15 21:45:25.985589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.342 [2024-07-15 21:45:25.985602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.342 [2024-07-15 21:45:25.985606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.342 [2024-07-15 21:45:25.985611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.342 [2024-07-15 21:45:25.985621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.342 qpair failed and we were unable to recover it. 
00:29:36.342 [2024-07-15 21:45:25.995572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.342 [2024-07-15 21:45:25.995644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.342 [2024-07-15 21:45:25.995657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.342 [2024-07-15 21:45:25.995665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.342 [2024-07-15 21:45:25.995669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.342 [2024-07-15 21:45:25.995679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.342 qpair failed and we were unable to recover it. 00:29:36.342 [2024-07-15 21:45:26.005599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.342 [2024-07-15 21:45:26.005666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.342 [2024-07-15 21:45:26.005679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.342 [2024-07-15 21:45:26.005683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.342 [2024-07-15 21:45:26.005688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.342 [2024-07-15 21:45:26.005698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.342 qpair failed and we were unable to recover it. 00:29:36.342 [2024-07-15 21:45:26.015652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.342 [2024-07-15 21:45:26.015717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.342 [2024-07-15 21:45:26.015729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.342 [2024-07-15 21:45:26.015734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.342 [2024-07-15 21:45:26.015738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.342 [2024-07-15 21:45:26.015748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.342 qpair failed and we were unable to recover it. 
00:29:36.342 [2024-07-15 21:45:26.025653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.342 [2024-07-15 21:45:26.025721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.342 [2024-07-15 21:45:26.025733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.342 [2024-07-15 21:45:26.025738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.342 [2024-07-15 21:45:26.025742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.342 [2024-07-15 21:45:26.025752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.342 qpair failed and we were unable to recover it. 00:29:36.342 [2024-07-15 21:45:26.035667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.342 [2024-07-15 21:45:26.035742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.342 [2024-07-15 21:45:26.035761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.342 [2024-07-15 21:45:26.035767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.342 [2024-07-15 21:45:26.035772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.343 [2024-07-15 21:45:26.035786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.343 qpair failed and we were unable to recover it. 00:29:36.343 [2024-07-15 21:45:26.045704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.343 [2024-07-15 21:45:26.045777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.343 [2024-07-15 21:45:26.045795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.343 [2024-07-15 21:45:26.045801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.343 [2024-07-15 21:45:26.045806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.343 [2024-07-15 21:45:26.045820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.343 qpair failed and we were unable to recover it. 
00:29:36.343 [2024-07-15 21:45:26.055731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.343 [2024-07-15 21:45:26.055801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.343 [2024-07-15 21:45:26.055820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.343 [2024-07-15 21:45:26.055826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.343 [2024-07-15 21:45:26.055830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.343 [2024-07-15 21:45:26.055845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.343 qpair failed and we were unable to recover it. 00:29:36.343 [2024-07-15 21:45:26.065719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.343 [2024-07-15 21:45:26.065785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.343 [2024-07-15 21:45:26.065804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.343 [2024-07-15 21:45:26.065810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.343 [2024-07-15 21:45:26.065814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.343 [2024-07-15 21:45:26.065828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.343 qpair failed and we were unable to recover it. 00:29:36.343 [2024-07-15 21:45:26.075790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.343 [2024-07-15 21:45:26.075870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.343 [2024-07-15 21:45:26.075889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.343 [2024-07-15 21:45:26.075895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.343 [2024-07-15 21:45:26.075899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.343 [2024-07-15 21:45:26.075913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.343 qpair failed and we were unable to recover it. 
00:29:36.343 [2024-07-15 21:45:26.085844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.343 [2024-07-15 21:45:26.085908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.343 [2024-07-15 21:45:26.085925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.343 [2024-07-15 21:45:26.085930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.343 [2024-07-15 21:45:26.085935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.343 [2024-07-15 21:45:26.085946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.343 qpair failed and we were unable to recover it. 00:29:36.343 [2024-07-15 21:45:26.095835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.343 [2024-07-15 21:45:26.095902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.343 [2024-07-15 21:45:26.095914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.343 [2024-07-15 21:45:26.095919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.343 [2024-07-15 21:45:26.095924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.343 [2024-07-15 21:45:26.095935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.343 qpair failed and we were unable to recover it. 00:29:36.343 [2024-07-15 21:45:26.105838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.343 [2024-07-15 21:45:26.105899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.343 [2024-07-15 21:45:26.105912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.343 [2024-07-15 21:45:26.105917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.343 [2024-07-15 21:45:26.105921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.343 [2024-07-15 21:45:26.105932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.343 qpair failed and we were unable to recover it. 
00:29:36.343 [2024-07-15 21:45:26.115889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.343 [2024-07-15 21:45:26.115958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.343 [2024-07-15 21:45:26.115970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.343 [2024-07-15 21:45:26.115975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.343 [2024-07-15 21:45:26.115979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.343 [2024-07-15 21:45:26.115990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.343 qpair failed and we were unable to recover it. 00:29:36.343 [2024-07-15 21:45:26.125925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.343 [2024-07-15 21:45:26.125993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.343 [2024-07-15 21:45:26.126006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.343 [2024-07-15 21:45:26.126011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.343 [2024-07-15 21:45:26.126015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.343 [2024-07-15 21:45:26.126028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.343 qpair failed and we were unable to recover it. 00:29:36.343 [2024-07-15 21:45:26.135967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.343 [2024-07-15 21:45:26.136031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.343 [2024-07-15 21:45:26.136043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.343 [2024-07-15 21:45:26.136048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.343 [2024-07-15 21:45:26.136052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.343 [2024-07-15 21:45:26.136063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.343 qpair failed and we were unable to recover it. 
00:29:36.606 [2024-07-15 21:45:26.145933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.606 [2024-07-15 21:45:26.145997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.606 [2024-07-15 21:45:26.146010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.606 [2024-07-15 21:45:26.146016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.606 [2024-07-15 21:45:26.146020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.606 [2024-07-15 21:45:26.146031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.606 qpair failed and we were unable to recover it. 00:29:36.606 [2024-07-15 21:45:26.156019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.606 [2024-07-15 21:45:26.156094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.606 [2024-07-15 21:45:26.156107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.606 [2024-07-15 21:45:26.156112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.606 [2024-07-15 21:45:26.156116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.606 [2024-07-15 21:45:26.156130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.606 qpair failed and we were unable to recover it. 00:29:36.606 [2024-07-15 21:45:26.165920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.606 [2024-07-15 21:45:26.165982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.606 [2024-07-15 21:45:26.165994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.606 [2024-07-15 21:45:26.165998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.606 [2024-07-15 21:45:26.166003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.606 [2024-07-15 21:45:26.166013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.606 qpair failed and we were unable to recover it. 
00:29:36.606 [2024-07-15 21:45:26.176070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.606 [2024-07-15 21:45:26.176166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.606 [2024-07-15 21:45:26.176181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.606 [2024-07-15 21:45:26.176187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.606 [2024-07-15 21:45:26.176191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.606 [2024-07-15 21:45:26.176202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.606 qpair failed and we were unable to recover it. 00:29:36.606 [2024-07-15 21:45:26.186059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.606 [2024-07-15 21:45:26.186124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.606 [2024-07-15 21:45:26.186137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.606 [2024-07-15 21:45:26.186142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.606 [2024-07-15 21:45:26.186146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.606 [2024-07-15 21:45:26.186157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.606 qpair failed and we were unable to recover it. 00:29:36.606 [2024-07-15 21:45:26.196118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.606 [2024-07-15 21:45:26.196182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.606 [2024-07-15 21:45:26.196194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.606 [2024-07-15 21:45:26.196198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.606 [2024-07-15 21:45:26.196203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.606 [2024-07-15 21:45:26.196213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.606 qpair failed and we were unable to recover it. 
00:29:36.606 [2024-07-15 21:45:26.206135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.606 [2024-07-15 21:45:26.206201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.606 [2024-07-15 21:45:26.206213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.606 [2024-07-15 21:45:26.206218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.606 [2024-07-15 21:45:26.206222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.606 [2024-07-15 21:45:26.206233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.606 qpair failed and we were unable to recover it. 00:29:36.606 [2024-07-15 21:45:26.216055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.606 [2024-07-15 21:45:26.216119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.606 [2024-07-15 21:45:26.216135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.606 [2024-07-15 21:45:26.216140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.606 [2024-07-15 21:45:26.216147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.606 [2024-07-15 21:45:26.216158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.606 qpair failed and we were unable to recover it. 00:29:36.606 [2024-07-15 21:45:26.226188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.606 [2024-07-15 21:45:26.226362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.606 [2024-07-15 21:45:26.226375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.606 [2024-07-15 21:45:26.226379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.607 [2024-07-15 21:45:26.226384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.607 [2024-07-15 21:45:26.226394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.607 qpair failed and we were unable to recover it. 
00:29:36.607 [2024-07-15 21:45:26.236180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.607 [2024-07-15 21:45:26.236245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.607 [2024-07-15 21:45:26.236257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.607 [2024-07-15 21:45:26.236262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.607 [2024-07-15 21:45:26.236266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.607 [2024-07-15 21:45:26.236277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.607 qpair failed and we were unable to recover it. 00:29:36.607 [2024-07-15 21:45:26.246152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.607 [2024-07-15 21:45:26.246211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.607 [2024-07-15 21:45:26.246223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.607 [2024-07-15 21:45:26.246228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.607 [2024-07-15 21:45:26.246233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.607 [2024-07-15 21:45:26.246244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.607 qpair failed and we were unable to recover it. 00:29:36.607 [2024-07-15 21:45:26.256275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.607 [2024-07-15 21:45:26.256339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.607 [2024-07-15 21:45:26.256351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.607 [2024-07-15 21:45:26.256356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.607 [2024-07-15 21:45:26.256360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.607 [2024-07-15 21:45:26.256370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.607 qpair failed and we were unable to recover it. 
00:29:36.607 [2024-07-15 21:45:26.266260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.607 [2024-07-15 21:45:26.266329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.607 [2024-07-15 21:45:26.266341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.607 [2024-07-15 21:45:26.266346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.607 [2024-07-15 21:45:26.266350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.607 [2024-07-15 21:45:26.266360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.607 qpair failed and we were unable to recover it. 00:29:36.607 [2024-07-15 21:45:26.276292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.607 [2024-07-15 21:45:26.276356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.607 [2024-07-15 21:45:26.276368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.607 [2024-07-15 21:45:26.276373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.607 [2024-07-15 21:45:26.276377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.607 [2024-07-15 21:45:26.276387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.607 qpair failed and we were unable to recover it. 00:29:36.607 [2024-07-15 21:45:26.286356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.607 [2024-07-15 21:45:26.286449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.607 [2024-07-15 21:45:26.286461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.607 [2024-07-15 21:45:26.286466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.607 [2024-07-15 21:45:26.286470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.607 [2024-07-15 21:45:26.286481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.607 qpair failed and we were unable to recover it. 
00:29:36.607 [2024-07-15 21:45:26.296408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.607 [2024-07-15 21:45:26.296579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.607 [2024-07-15 21:45:26.296592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.607 [2024-07-15 21:45:26.296597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.607 [2024-07-15 21:45:26.296601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.607 [2024-07-15 21:45:26.296611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.607 qpair failed and we were unable to recover it. 00:29:36.607 [2024-07-15 21:45:26.306410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.607 [2024-07-15 21:45:26.306485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.607 [2024-07-15 21:45:26.306497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.607 [2024-07-15 21:45:26.306502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.607 [2024-07-15 21:45:26.306509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.607 [2024-07-15 21:45:26.306520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.607 qpair failed and we were unable to recover it. 00:29:36.607 [2024-07-15 21:45:26.316398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.607 [2024-07-15 21:45:26.316462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.607 [2024-07-15 21:45:26.316475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.607 [2024-07-15 21:45:26.316479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.607 [2024-07-15 21:45:26.316484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.607 [2024-07-15 21:45:26.316494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.607 qpair failed and we were unable to recover it. 
00:29:36.607 [2024-07-15 21:45:26.326408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.607 [2024-07-15 21:45:26.326478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.607 [2024-07-15 21:45:26.326490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.607 [2024-07-15 21:45:26.326495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.607 [2024-07-15 21:45:26.326499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.607 [2024-07-15 21:45:26.326510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.607 qpair failed and we were unable to recover it. 00:29:36.607 [2024-07-15 21:45:26.336466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.607 [2024-07-15 21:45:26.336532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.607 [2024-07-15 21:45:26.336544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.607 [2024-07-15 21:45:26.336549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.607 [2024-07-15 21:45:26.336553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.607 [2024-07-15 21:45:26.336563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.607 qpair failed and we were unable to recover it. 00:29:36.607 [2024-07-15 21:45:26.346486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.607 [2024-07-15 21:45:26.346550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.607 [2024-07-15 21:45:26.346562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.607 [2024-07-15 21:45:26.346567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.607 [2024-07-15 21:45:26.346572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.607 [2024-07-15 21:45:26.346582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.607 qpair failed and we were unable to recover it. 
00:29:36.607 [2024-07-15 21:45:26.356541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.607 [2024-07-15 21:45:26.356608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.607 [2024-07-15 21:45:26.356620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.607 [2024-07-15 21:45:26.356625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.607 [2024-07-15 21:45:26.356629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.607 [2024-07-15 21:45:26.356640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.607 qpair failed and we were unable to recover it. 00:29:36.607 [2024-07-15 21:45:26.366512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.607 [2024-07-15 21:45:26.366578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.607 [2024-07-15 21:45:26.366590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.607 [2024-07-15 21:45:26.366595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.607 [2024-07-15 21:45:26.366599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.607 [2024-07-15 21:45:26.366610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.608 qpair failed and we were unable to recover it. 00:29:36.608 [2024-07-15 21:45:26.376548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.608 [2024-07-15 21:45:26.376610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.608 [2024-07-15 21:45:26.376622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.608 [2024-07-15 21:45:26.376626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.608 [2024-07-15 21:45:26.376630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.608 [2024-07-15 21:45:26.376641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.608 qpair failed and we were unable to recover it. 
00:29:36.608 [2024-07-15 21:45:26.386589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.608 [2024-07-15 21:45:26.386660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.608 [2024-07-15 21:45:26.386672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.608 [2024-07-15 21:45:26.386677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.608 [2024-07-15 21:45:26.386681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.608 [2024-07-15 21:45:26.386691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.608 qpair failed and we were unable to recover it. 00:29:36.608 [2024-07-15 21:45:26.396607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.608 [2024-07-15 21:45:26.396677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.608 [2024-07-15 21:45:26.396689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.608 [2024-07-15 21:45:26.396697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.608 [2024-07-15 21:45:26.396702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.608 [2024-07-15 21:45:26.396712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.608 qpair failed and we were unable to recover it. 00:29:36.608 [2024-07-15 21:45:26.406668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.608 [2024-07-15 21:45:26.406727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.608 [2024-07-15 21:45:26.406739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.608 [2024-07-15 21:45:26.406744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.608 [2024-07-15 21:45:26.406748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.608 [2024-07-15 21:45:26.406759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.608 qpair failed and we were unable to recover it. 
00:29:36.871 [2024-07-15 21:45:26.416736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.871 [2024-07-15 21:45:26.416806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.871 [2024-07-15 21:45:26.416820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.871 [2024-07-15 21:45:26.416825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.871 [2024-07-15 21:45:26.416829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.871 [2024-07-15 21:45:26.416840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.871 qpair failed and we were unable to recover it. 00:29:36.871 [2024-07-15 21:45:26.426729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.871 [2024-07-15 21:45:26.426795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.871 [2024-07-15 21:45:26.426808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.871 [2024-07-15 21:45:26.426813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.872 [2024-07-15 21:45:26.426817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.872 [2024-07-15 21:45:26.426828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.872 qpair failed and we were unable to recover it. 00:29:36.872 [2024-07-15 21:45:26.436791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.872 [2024-07-15 21:45:26.436869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.872 [2024-07-15 21:45:26.436881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.872 [2024-07-15 21:45:26.436886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.872 [2024-07-15 21:45:26.436890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.872 [2024-07-15 21:45:26.436901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.872 qpair failed and we were unable to recover it. 
00:29:36.872 [2024-07-15 21:45:26.446784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.872 [2024-07-15 21:45:26.446881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.872 [2024-07-15 21:45:26.446893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.872 [2024-07-15 21:45:26.446898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.872 [2024-07-15 21:45:26.446902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.872 [2024-07-15 21:45:26.446913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.872 qpair failed and we were unable to recover it. 00:29:36.872 [2024-07-15 21:45:26.456762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.872 [2024-07-15 21:45:26.456824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.872 [2024-07-15 21:45:26.456837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.872 [2024-07-15 21:45:26.456842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.872 [2024-07-15 21:45:26.456846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.872 [2024-07-15 21:45:26.456857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.872 qpair failed and we were unable to recover it. 00:29:36.872 [2024-07-15 21:45:26.466821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.872 [2024-07-15 21:45:26.466884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.872 [2024-07-15 21:45:26.466897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.872 [2024-07-15 21:45:26.466901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.872 [2024-07-15 21:45:26.466906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.872 [2024-07-15 21:45:26.466916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.872 qpair failed and we were unable to recover it. 
00:29:36.872 [2024-07-15 21:45:26.476826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.872 [2024-07-15 21:45:26.476890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.872 [2024-07-15 21:45:26.476902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.872 [2024-07-15 21:45:26.476907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.872 [2024-07-15 21:45:26.476912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.872 [2024-07-15 21:45:26.476922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.872 qpair failed and we were unable to recover it. 00:29:36.872 [2024-07-15 21:45:26.486882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.872 [2024-07-15 21:45:26.486945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.872 [2024-07-15 21:45:26.486961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.872 [2024-07-15 21:45:26.486966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.872 [2024-07-15 21:45:26.486970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.872 [2024-07-15 21:45:26.486980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.872 qpair failed and we were unable to recover it. 00:29:36.872 [2024-07-15 21:45:26.496843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.872 [2024-07-15 21:45:26.496907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.872 [2024-07-15 21:45:26.496925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.872 [2024-07-15 21:45:26.496931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.872 [2024-07-15 21:45:26.496936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.872 [2024-07-15 21:45:26.496950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.872 qpair failed and we were unable to recover it. 
00:29:36.872 [2024-07-15 21:45:26.506935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.872 [2024-07-15 21:45:26.507000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.872 [2024-07-15 21:45:26.507018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.872 [2024-07-15 21:45:26.507024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.872 [2024-07-15 21:45:26.507029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.872 [2024-07-15 21:45:26.507043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.872 qpair failed and we were unable to recover it. 00:29:36.872 [2024-07-15 21:45:26.516940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.872 [2024-07-15 21:45:26.517004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.872 [2024-07-15 21:45:26.517017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.872 [2024-07-15 21:45:26.517022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.872 [2024-07-15 21:45:26.517026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.872 [2024-07-15 21:45:26.517038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.872 qpair failed and we were unable to recover it. 00:29:36.872 [2024-07-15 21:45:26.526966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.872 [2024-07-15 21:45:26.527027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.872 [2024-07-15 21:45:26.527039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.872 [2024-07-15 21:45:26.527045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.872 [2024-07-15 21:45:26.527049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.872 [2024-07-15 21:45:26.527063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.872 qpair failed and we were unable to recover it. 
00:29:36.872 [2024-07-15 21:45:26.536985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.872 [2024-07-15 21:45:26.537041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.872 [2024-07-15 21:45:26.537053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.872 [2024-07-15 21:45:26.537058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.872 [2024-07-15 21:45:26.537062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.872 [2024-07-15 21:45:26.537073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.872 qpair failed and we were unable to recover it. 00:29:36.872 [2024-07-15 21:45:26.547062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.872 [2024-07-15 21:45:26.547130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.872 [2024-07-15 21:45:26.547142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.872 [2024-07-15 21:45:26.547147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.872 [2024-07-15 21:45:26.547151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.872 [2024-07-15 21:45:26.547162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.872 qpair failed and we were unable to recover it. 00:29:36.872 [2024-07-15 21:45:26.557043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.872 [2024-07-15 21:45:26.557120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.872 [2024-07-15 21:45:26.557135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.872 [2024-07-15 21:45:26.557140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.872 [2024-07-15 21:45:26.557144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.872 [2024-07-15 21:45:26.557155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.872 qpair failed and we were unable to recover it. 
00:29:36.872 [2024-07-15 21:45:26.567012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.872 [2024-07-15 21:45:26.567075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.873 [2024-07-15 21:45:26.567087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.873 [2024-07-15 21:45:26.567092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.873 [2024-07-15 21:45:26.567096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.873 [2024-07-15 21:45:26.567106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.873 qpair failed and we were unable to recover it. 00:29:36.873 [2024-07-15 21:45:26.577196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.873 [2024-07-15 21:45:26.577259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.873 [2024-07-15 21:45:26.577274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.873 [2024-07-15 21:45:26.577279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.873 [2024-07-15 21:45:26.577283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.873 [2024-07-15 21:45:26.577294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.873 qpair failed and we were unable to recover it. 00:29:36.873 [2024-07-15 21:45:26.587131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.873 [2024-07-15 21:45:26.587225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.873 [2024-07-15 21:45:26.587237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.873 [2024-07-15 21:45:26.587242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.873 [2024-07-15 21:45:26.587246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.873 [2024-07-15 21:45:26.587257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.873 qpair failed and we were unable to recover it. 
00:29:36.873 [2024-07-15 21:45:26.597152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.873 [2024-07-15 21:45:26.597223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.873 [2024-07-15 21:45:26.597235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.873 [2024-07-15 21:45:26.597240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.873 [2024-07-15 21:45:26.597244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.873 [2024-07-15 21:45:26.597255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.873 qpair failed and we were unable to recover it. 00:29:36.873 [2024-07-15 21:45:26.607060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.873 [2024-07-15 21:45:26.607119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.873 [2024-07-15 21:45:26.607134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.873 [2024-07-15 21:45:26.607138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.873 [2024-07-15 21:45:26.607143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.873 [2024-07-15 21:45:26.607153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.873 qpair failed and we were unable to recover it. 00:29:36.873 [2024-07-15 21:45:26.617232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.873 [2024-07-15 21:45:26.617294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.873 [2024-07-15 21:45:26.617306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.873 [2024-07-15 21:45:26.617311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.873 [2024-07-15 21:45:26.617315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.873 [2024-07-15 21:45:26.617331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.873 qpair failed and we were unable to recover it. 
00:29:36.873 [2024-07-15 21:45:26.627256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.873 [2024-07-15 21:45:26.627319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.873 [2024-07-15 21:45:26.627331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.873 [2024-07-15 21:45:26.627336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.873 [2024-07-15 21:45:26.627340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.873 [2024-07-15 21:45:26.627351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.873 qpair failed and we were unable to recover it. 00:29:36.873 [2024-07-15 21:45:26.637292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.873 [2024-07-15 21:45:26.637359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.873 [2024-07-15 21:45:26.637371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.873 [2024-07-15 21:45:26.637376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.873 [2024-07-15 21:45:26.637380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.873 [2024-07-15 21:45:26.637391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.873 qpair failed and we were unable to recover it. 00:29:36.873 [2024-07-15 21:45:26.647316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.873 [2024-07-15 21:45:26.647374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.873 [2024-07-15 21:45:26.647386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.873 [2024-07-15 21:45:26.647391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.873 [2024-07-15 21:45:26.647395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.873 [2024-07-15 21:45:26.647405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.873 qpair failed and we were unable to recover it. 
00:29:36.873 [2024-07-15 21:45:26.657323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.873 [2024-07-15 21:45:26.657387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.873 [2024-07-15 21:45:26.657399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.873 [2024-07-15 21:45:26.657404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.873 [2024-07-15 21:45:26.657408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.873 [2024-07-15 21:45:26.657418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.873 qpair failed and we were unable to recover it. 00:29:36.873 [2024-07-15 21:45:26.667391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.873 [2024-07-15 21:45:26.667561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.873 [2024-07-15 21:45:26.667573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.873 [2024-07-15 21:45:26.667578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.873 [2024-07-15 21:45:26.667582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:36.873 [2024-07-15 21:45:26.667592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.873 qpair failed and we were unable to recover it. 00:29:37.137 [2024-07-15 21:45:26.677371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.137 [2024-07-15 21:45:26.677439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.137 [2024-07-15 21:45:26.677451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.137 [2024-07-15 21:45:26.677456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.137 [2024-07-15 21:45:26.677460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.137 [2024-07-15 21:45:26.677471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.137 qpair failed and we were unable to recover it. 
00:29:37.137 [2024-07-15 21:45:26.687344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.137 [2024-07-15 21:45:26.687406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.137 [2024-07-15 21:45:26.687418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.137 [2024-07-15 21:45:26.687423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.137 [2024-07-15 21:45:26.687427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.137 [2024-07-15 21:45:26.687437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.137 qpair failed and we were unable to recover it. 00:29:37.137 [2024-07-15 21:45:26.697422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.137 [2024-07-15 21:45:26.697481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.137 [2024-07-15 21:45:26.697493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.137 [2024-07-15 21:45:26.697498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.137 [2024-07-15 21:45:26.697502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.137 [2024-07-15 21:45:26.697513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.137 qpair failed and we were unable to recover it. 00:29:37.137 [2024-07-15 21:45:26.707434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.137 [2024-07-15 21:45:26.707543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.137 [2024-07-15 21:45:26.707555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.137 [2024-07-15 21:45:26.707560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.137 [2024-07-15 21:45:26.707567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.137 [2024-07-15 21:45:26.707578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.137 qpair failed and we were unable to recover it. 
00:29:37.137 [2024-07-15 21:45:26.717465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.137 [2024-07-15 21:45:26.717529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.137 [2024-07-15 21:45:26.717541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.137 [2024-07-15 21:45:26.717546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.137 [2024-07-15 21:45:26.717550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.137 [2024-07-15 21:45:26.717560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.137 qpair failed and we were unable to recover it. 00:29:37.137 [2024-07-15 21:45:26.727438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.137 [2024-07-15 21:45:26.727508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.137 [2024-07-15 21:45:26.727519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.137 [2024-07-15 21:45:26.727524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.137 [2024-07-15 21:45:26.727528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.137 [2024-07-15 21:45:26.727539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.137 qpair failed and we were unable to recover it. 00:29:37.137 [2024-07-15 21:45:26.737518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.137 [2024-07-15 21:45:26.737579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.137 [2024-07-15 21:45:26.737591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.137 [2024-07-15 21:45:26.737596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.137 [2024-07-15 21:45:26.737600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.137 [2024-07-15 21:45:26.737611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.137 qpair failed and we were unable to recover it. 
00:29:37.137 [2024-07-15 21:45:26.747542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.137 [2024-07-15 21:45:26.747612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.137 [2024-07-15 21:45:26.747623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.137 [2024-07-15 21:45:26.747628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.137 [2024-07-15 21:45:26.747633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.137 [2024-07-15 21:45:26.747643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.137 qpair failed and we were unable to recover it. 00:29:37.137 [2024-07-15 21:45:26.757564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.137 [2024-07-15 21:45:26.757631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.137 [2024-07-15 21:45:26.757644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.137 [2024-07-15 21:45:26.757649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.137 [2024-07-15 21:45:26.757654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.137 [2024-07-15 21:45:26.757664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.137 qpair failed and we were unable to recover it. 00:29:37.137 [2024-07-15 21:45:26.767533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.137 [2024-07-15 21:45:26.767641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.137 [2024-07-15 21:45:26.767653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.137 [2024-07-15 21:45:26.767658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.137 [2024-07-15 21:45:26.767662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.137 [2024-07-15 21:45:26.767672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.137 qpair failed and we were unable to recover it. 
00:29:37.137 [2024-07-15 21:45:26.777627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.137 [2024-07-15 21:45:26.777688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.137 [2024-07-15 21:45:26.777700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.137 [2024-07-15 21:45:26.777705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.137 [2024-07-15 21:45:26.777709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.137 [2024-07-15 21:45:26.777720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.137 qpair failed and we were unable to recover it. 00:29:37.137 [2024-07-15 21:45:26.787641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.137 [2024-07-15 21:45:26.787706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.137 [2024-07-15 21:45:26.787718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.137 [2024-07-15 21:45:26.787723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.137 [2024-07-15 21:45:26.787727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.137 [2024-07-15 21:45:26.787737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.137 qpair failed and we were unable to recover it. 00:29:37.137 [2024-07-15 21:45:26.797699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.137 [2024-07-15 21:45:26.797763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.137 [2024-07-15 21:45:26.797776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.137 [2024-07-15 21:45:26.797784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.137 [2024-07-15 21:45:26.797788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.137 [2024-07-15 21:45:26.797800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.137 qpair failed and we were unable to recover it. 
00:29:37.137 [2024-07-15 21:45:26.807706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.137 [2024-07-15 21:45:26.807768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.137 [2024-07-15 21:45:26.807787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.137 [2024-07-15 21:45:26.807793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.137 [2024-07-15 21:45:26.807798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.137 [2024-07-15 21:45:26.807812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.137 qpair failed and we were unable to recover it. 00:29:37.137 [2024-07-15 21:45:26.817734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.137 [2024-07-15 21:45:26.817802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.137 [2024-07-15 21:45:26.817820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.137 [2024-07-15 21:45:26.817826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.137 [2024-07-15 21:45:26.817830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.138 [2024-07-15 21:45:26.817845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.138 qpair failed and we were unable to recover it. 00:29:37.138 [2024-07-15 21:45:26.827764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.138 [2024-07-15 21:45:26.827827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.138 [2024-07-15 21:45:26.827840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.138 [2024-07-15 21:45:26.827845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.138 [2024-07-15 21:45:26.827849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.138 [2024-07-15 21:45:26.827860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.138 qpair failed and we were unable to recover it. 
00:29:37.138 [2024-07-15 21:45:26.837790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.138 [2024-07-15 21:45:26.837851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.138 [2024-07-15 21:45:26.837864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.138 [2024-07-15 21:45:26.837869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.138 [2024-07-15 21:45:26.837873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.138 [2024-07-15 21:45:26.837884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.138 qpair failed and we were unable to recover it. 00:29:37.138 [2024-07-15 21:45:26.847823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.138 [2024-07-15 21:45:26.847887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.138 [2024-07-15 21:45:26.847899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.138 [2024-07-15 21:45:26.847904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.138 [2024-07-15 21:45:26.847908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.138 [2024-07-15 21:45:26.847919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.138 qpair failed and we were unable to recover it. 00:29:37.138 [2024-07-15 21:45:26.857843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.138 [2024-07-15 21:45:26.857907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.138 [2024-07-15 21:45:26.857926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.138 [2024-07-15 21:45:26.857932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.138 [2024-07-15 21:45:26.857936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.138 [2024-07-15 21:45:26.857950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.138 qpair failed and we were unable to recover it. 
00:29:37.138 [2024-07-15 21:45:26.867889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.138 [2024-07-15 21:45:26.867960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.138 [2024-07-15 21:45:26.867978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.138 [2024-07-15 21:45:26.867985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.138 [2024-07-15 21:45:26.867989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.138 [2024-07-15 21:45:26.868003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.138 qpair failed and we were unable to recover it. 00:29:37.138 [2024-07-15 21:45:26.877902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.138 [2024-07-15 21:45:26.877970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.138 [2024-07-15 21:45:26.877984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.138 [2024-07-15 21:45:26.877989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.138 [2024-07-15 21:45:26.877993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.138 [2024-07-15 21:45:26.878004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.138 qpair failed and we were unable to recover it. 00:29:37.138 [2024-07-15 21:45:26.887968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.138 [2024-07-15 21:45:26.888067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.138 [2024-07-15 21:45:26.888080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.138 [2024-07-15 21:45:26.888089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.138 [2024-07-15 21:45:26.888094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.138 [2024-07-15 21:45:26.888105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.138 qpair failed and we were unable to recover it. 
00:29:37.138 [2024-07-15 21:45:26.897956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.138 [2024-07-15 21:45:26.898021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.138 [2024-07-15 21:45:26.898033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.138 [2024-07-15 21:45:26.898038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.138 [2024-07-15 21:45:26.898042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.138 [2024-07-15 21:45:26.898053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.138 qpair failed and we were unable to recover it. 00:29:37.138 [2024-07-15 21:45:26.907868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.138 [2024-07-15 21:45:26.907932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.138 [2024-07-15 21:45:26.907944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.138 [2024-07-15 21:45:26.907949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.138 [2024-07-15 21:45:26.907953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.138 [2024-07-15 21:45:26.907963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.138 qpair failed and we were unable to recover it. 00:29:37.138 [2024-07-15 21:45:26.918022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.138 [2024-07-15 21:45:26.918091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.138 [2024-07-15 21:45:26.918104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.138 [2024-07-15 21:45:26.918108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.138 [2024-07-15 21:45:26.918112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.138 [2024-07-15 21:45:26.918126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.138 qpair failed and we were unable to recover it. 
00:29:37.138 [2024-07-15 21:45:26.928032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.138 [2024-07-15 21:45:26.928139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.138 [2024-07-15 21:45:26.928153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.138 [2024-07-15 21:45:26.928158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.138 [2024-07-15 21:45:26.928162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.138 [2024-07-15 21:45:26.928173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.138 qpair failed and we were unable to recover it. 00:29:37.138 [2024-07-15 21:45:26.938056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.138 [2024-07-15 21:45:26.938117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.138 [2024-07-15 21:45:26.938132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.138 [2024-07-15 21:45:26.938137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.138 [2024-07-15 21:45:26.938141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.138 [2024-07-15 21:45:26.938152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.138 qpair failed and we were unable to recover it. 00:29:37.401 [2024-07-15 21:45:26.948074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.401 [2024-07-15 21:45:26.948141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.401 [2024-07-15 21:45:26.948153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.401 [2024-07-15 21:45:26.948158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.401 [2024-07-15 21:45:26.948162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.401 [2024-07-15 21:45:26.948173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.401 qpair failed and we were unable to recover it. 
00:29:37.401 [2024-07-15 21:45:26.958136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.401 [2024-07-15 21:45:26.958201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.401 [2024-07-15 21:45:26.958213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.401 [2024-07-15 21:45:26.958218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.401 [2024-07-15 21:45:26.958222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.401 [2024-07-15 21:45:26.958233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.401 qpair failed and we were unable to recover it. 00:29:37.401 [2024-07-15 21:45:26.968162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.401 [2024-07-15 21:45:26.968222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.401 [2024-07-15 21:45:26.968234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.401 [2024-07-15 21:45:26.968239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.401 [2024-07-15 21:45:26.968244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.401 [2024-07-15 21:45:26.968254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.402 qpair failed and we were unable to recover it. 00:29:37.402 [2024-07-15 21:45:26.978160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.402 [2024-07-15 21:45:26.978231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.402 [2024-07-15 21:45:26.978245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.402 [2024-07-15 21:45:26.978250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.402 [2024-07-15 21:45:26.978254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.402 [2024-07-15 21:45:26.978265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.402 qpair failed and we were unable to recover it. 
00:29:37.402 [2024-07-15 21:45:26.988089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.402 [2024-07-15 21:45:26.988156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.402 [2024-07-15 21:45:26.988170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.402 [2024-07-15 21:45:26.988175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.402 [2024-07-15 21:45:26.988180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.402 [2024-07-15 21:45:26.988191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.402 qpair failed and we were unable to recover it. 00:29:37.402 [2024-07-15 21:45:26.998244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.402 [2024-07-15 21:45:26.998314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.402 [2024-07-15 21:45:26.998326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.402 [2024-07-15 21:45:26.998331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.402 [2024-07-15 21:45:26.998335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.402 [2024-07-15 21:45:26.998347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.402 qpair failed and we were unable to recover it. 00:29:37.402 [2024-07-15 21:45:27.008275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.402 [2024-07-15 21:45:27.008335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.402 [2024-07-15 21:45:27.008347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.402 [2024-07-15 21:45:27.008352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.402 [2024-07-15 21:45:27.008356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.402 [2024-07-15 21:45:27.008366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.402 qpair failed and we were unable to recover it. 
00:29:37.402 [2024-07-15 21:45:27.018300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.402 [2024-07-15 21:45:27.018410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.402 [2024-07-15 21:45:27.018422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.402 [2024-07-15 21:45:27.018427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.402 [2024-07-15 21:45:27.018431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.402 [2024-07-15 21:45:27.018445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.402 qpair failed and we were unable to recover it. 00:29:37.402 [2024-07-15 21:45:27.028364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.402 [2024-07-15 21:45:27.028426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.402 [2024-07-15 21:45:27.028438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.402 [2024-07-15 21:45:27.028443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.402 [2024-07-15 21:45:27.028447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.402 [2024-07-15 21:45:27.028458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.402 qpair failed and we were unable to recover it. 00:29:37.402 [2024-07-15 21:45:27.038365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.402 [2024-07-15 21:45:27.038462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.402 [2024-07-15 21:45:27.038475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.402 [2024-07-15 21:45:27.038480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.402 [2024-07-15 21:45:27.038484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.402 [2024-07-15 21:45:27.038495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.402 qpair failed and we were unable to recover it. 
00:29:37.402 [2024-07-15 21:45:27.048505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.402 [2024-07-15 21:45:27.048566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.402 [2024-07-15 21:45:27.048579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.402 [2024-07-15 21:45:27.048584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.402 [2024-07-15 21:45:27.048588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.402 [2024-07-15 21:45:27.048599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.402 qpair failed and we were unable to recover it. 00:29:37.402 [2024-07-15 21:45:27.058290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.402 [2024-07-15 21:45:27.058350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.402 [2024-07-15 21:45:27.058362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.402 [2024-07-15 21:45:27.058367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.402 [2024-07-15 21:45:27.058371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.402 [2024-07-15 21:45:27.058383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.402 qpair failed and we were unable to recover it. 00:29:37.402 [2024-07-15 21:45:27.068371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.402 [2024-07-15 21:45:27.068466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.402 [2024-07-15 21:45:27.068482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.402 [2024-07-15 21:45:27.068487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.402 [2024-07-15 21:45:27.068491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.402 [2024-07-15 21:45:27.068502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.402 qpair failed and we were unable to recover it. 
00:29:37.402 [2024-07-15 21:45:27.078328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.402 [2024-07-15 21:45:27.078509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.402 [2024-07-15 21:45:27.078521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.402 [2024-07-15 21:45:27.078526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.402 [2024-07-15 21:45:27.078530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.402 [2024-07-15 21:45:27.078541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.402 qpair failed and we were unable to recover it. 00:29:37.402 [2024-07-15 21:45:27.088433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.402 [2024-07-15 21:45:27.088494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.402 [2024-07-15 21:45:27.088506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.402 [2024-07-15 21:45:27.088511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.402 [2024-07-15 21:45:27.088516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.402 [2024-07-15 21:45:27.088526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.402 qpair failed and we were unable to recover it. 00:29:37.402 [2024-07-15 21:45:27.098517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.402 [2024-07-15 21:45:27.098579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.402 [2024-07-15 21:45:27.098591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.402 [2024-07-15 21:45:27.098596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.402 [2024-07-15 21:45:27.098600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.402 [2024-07-15 21:45:27.098611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.402 qpair failed and we were unable to recover it. 
00:29:37.402 [2024-07-15 21:45:27.108561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.402 [2024-07-15 21:45:27.108633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.402 [2024-07-15 21:45:27.108645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.402 [2024-07-15 21:45:27.108650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.402 [2024-07-15 21:45:27.108657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.402 [2024-07-15 21:45:27.108667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.402 qpair failed and we were unable to recover it. 00:29:37.402 [2024-07-15 21:45:27.118574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.403 [2024-07-15 21:45:27.118647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.403 [2024-07-15 21:45:27.118659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.403 [2024-07-15 21:45:27.118664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.403 [2024-07-15 21:45:27.118668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.403 [2024-07-15 21:45:27.118678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.403 qpair failed and we were unable to recover it. 00:29:37.403 [2024-07-15 21:45:27.128589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.403 [2024-07-15 21:45:27.128652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.403 [2024-07-15 21:45:27.128664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.403 [2024-07-15 21:45:27.128669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.403 [2024-07-15 21:45:27.128673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.403 [2024-07-15 21:45:27.128683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.403 qpair failed and we were unable to recover it. 
00:29:37.403 [2024-07-15 21:45:27.138600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.403 [2024-07-15 21:45:27.138663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.403 [2024-07-15 21:45:27.138675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.403 [2024-07-15 21:45:27.138680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.403 [2024-07-15 21:45:27.138684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.403 [2024-07-15 21:45:27.138695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.403 qpair failed and we were unable to recover it. 00:29:37.403 [2024-07-15 21:45:27.148631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.403 [2024-07-15 21:45:27.148693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.403 [2024-07-15 21:45:27.148705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.403 [2024-07-15 21:45:27.148710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.403 [2024-07-15 21:45:27.148714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.403 [2024-07-15 21:45:27.148725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.403 qpair failed and we were unable to recover it. 00:29:37.403 [2024-07-15 21:45:27.158741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.403 [2024-07-15 21:45:27.158849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.403 [2024-07-15 21:45:27.158861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.403 [2024-07-15 21:45:27.158866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.403 [2024-07-15 21:45:27.158870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.403 [2024-07-15 21:45:27.158881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.403 qpair failed and we were unable to recover it. 
00:29:37.403 [2024-07-15 21:45:27.168688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.403 [2024-07-15 21:45:27.168755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.403 [2024-07-15 21:45:27.168774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.403 [2024-07-15 21:45:27.168780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.403 [2024-07-15 21:45:27.168785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.403 [2024-07-15 21:45:27.168799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.403 qpair failed and we were unable to recover it. 00:29:37.403 [2024-07-15 21:45:27.178700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.403 [2024-07-15 21:45:27.178760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.403 [2024-07-15 21:45:27.178773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.403 [2024-07-15 21:45:27.178779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.403 [2024-07-15 21:45:27.178783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.403 [2024-07-15 21:45:27.178794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.403 qpair failed and we were unable to recover it. 00:29:37.403 [2024-07-15 21:45:27.188744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.403 [2024-07-15 21:45:27.188805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.403 [2024-07-15 21:45:27.188818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.403 [2024-07-15 21:45:27.188823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.403 [2024-07-15 21:45:27.188827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.403 [2024-07-15 21:45:27.188838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.403 qpair failed and we were unable to recover it. 
00:29:37.403 [2024-07-15 21:45:27.198806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.403 [2024-07-15 21:45:27.198909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.403 [2024-07-15 21:45:27.198921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.403 [2024-07-15 21:45:27.198930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.403 [2024-07-15 21:45:27.198934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.403 [2024-07-15 21:45:27.198945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.403 qpair failed and we were unable to recover it. 00:29:37.665 [2024-07-15 21:45:27.208803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.665 [2024-07-15 21:45:27.208866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.665 [2024-07-15 21:45:27.208879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.665 [2024-07-15 21:45:27.208883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.665 [2024-07-15 21:45:27.208888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.665 [2024-07-15 21:45:27.208898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.665 qpair failed and we were unable to recover it. 00:29:37.665 [2024-07-15 21:45:27.218822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.665 [2024-07-15 21:45:27.218883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.665 [2024-07-15 21:45:27.218895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.665 [2024-07-15 21:45:27.218900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.665 [2024-07-15 21:45:27.218904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.665 [2024-07-15 21:45:27.218915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.665 qpair failed and we were unable to recover it. 
00:29:37.665 [2024-07-15 21:45:27.228858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.665 [2024-07-15 21:45:27.228924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.665 [2024-07-15 21:45:27.228937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.666 [2024-07-15 21:45:27.228942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.666 [2024-07-15 21:45:27.228946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.666 [2024-07-15 21:45:27.228959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.666 qpair failed and we were unable to recover it. 00:29:37.666 [2024-07-15 21:45:27.238892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.666 [2024-07-15 21:45:27.238960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.666 [2024-07-15 21:45:27.238972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.666 [2024-07-15 21:45:27.238977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.666 [2024-07-15 21:45:27.238981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.666 [2024-07-15 21:45:27.238992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.666 qpair failed and we were unable to recover it. 00:29:37.666 [2024-07-15 21:45:27.248898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.666 [2024-07-15 21:45:27.248957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.666 [2024-07-15 21:45:27.248969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.666 [2024-07-15 21:45:27.248974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.666 [2024-07-15 21:45:27.248978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.666 [2024-07-15 21:45:27.248989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.666 qpair failed and we were unable to recover it. 
00:29:37.666 [2024-07-15 21:45:27.258903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.666 [2024-07-15 21:45:27.258965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.666 [2024-07-15 21:45:27.258977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.666 [2024-07-15 21:45:27.258981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.666 [2024-07-15 21:45:27.258986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.666 [2024-07-15 21:45:27.258996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.666 qpair failed and we were unable to recover it. 00:29:37.666 [2024-07-15 21:45:27.268957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.666 [2024-07-15 21:45:27.269026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.666 [2024-07-15 21:45:27.269045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.666 [2024-07-15 21:45:27.269051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.666 [2024-07-15 21:45:27.269055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.666 [2024-07-15 21:45:27.269069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.666 qpair failed and we were unable to recover it. 00:29:37.666 [2024-07-15 21:45:27.279009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.666 [2024-07-15 21:45:27.279079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.666 [2024-07-15 21:45:27.279092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.666 [2024-07-15 21:45:27.279097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.666 [2024-07-15 21:45:27.279101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.666 [2024-07-15 21:45:27.279112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.666 qpair failed and we were unable to recover it. 
00:29:37.666 [2024-07-15 21:45:27.289030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.666 [2024-07-15 21:45:27.289095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.666 [2024-07-15 21:45:27.289108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.666 [2024-07-15 21:45:27.289116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.666 [2024-07-15 21:45:27.289120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.666 [2024-07-15 21:45:27.289135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.666 qpair failed and we were unable to recover it. 00:29:37.666 [2024-07-15 21:45:27.299056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.666 [2024-07-15 21:45:27.299119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.666 [2024-07-15 21:45:27.299133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.666 [2024-07-15 21:45:27.299138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.666 [2024-07-15 21:45:27.299143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.666 [2024-07-15 21:45:27.299153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.666 qpair failed and we were unable to recover it. 00:29:37.666 [2024-07-15 21:45:27.309059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.666 [2024-07-15 21:45:27.309125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.666 [2024-07-15 21:45:27.309138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.666 [2024-07-15 21:45:27.309144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.666 [2024-07-15 21:45:27.309148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.666 [2024-07-15 21:45:27.309159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.666 qpair failed and we were unable to recover it. 
00:29:37.666 [2024-07-15 21:45:27.319109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.666 [2024-07-15 21:45:27.319198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.666 [2024-07-15 21:45:27.319210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.666 [2024-07-15 21:45:27.319215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.666 [2024-07-15 21:45:27.319219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.666 [2024-07-15 21:45:27.319230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.666 qpair failed and we were unable to recover it. 00:29:37.666 [2024-07-15 21:45:27.329128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.666 [2024-07-15 21:45:27.329190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.666 [2024-07-15 21:45:27.329202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.666 [2024-07-15 21:45:27.329207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.666 [2024-07-15 21:45:27.329212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.666 [2024-07-15 21:45:27.329222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.666 qpair failed and we were unable to recover it. 00:29:37.666 [2024-07-15 21:45:27.339197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.666 [2024-07-15 21:45:27.339256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.666 [2024-07-15 21:45:27.339268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.666 [2024-07-15 21:45:27.339273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.666 [2024-07-15 21:45:27.339277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.666 [2024-07-15 21:45:27.339288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.666 qpair failed and we were unable to recover it. 
00:29:37.666 [2024-07-15 21:45:27.349181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.666 [2024-07-15 21:45:27.349248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.666 [2024-07-15 21:45:27.349260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.666 [2024-07-15 21:45:27.349265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.666 [2024-07-15 21:45:27.349269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.666 [2024-07-15 21:45:27.349280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.666 qpair failed and we were unable to recover it. 00:29:37.666 [2024-07-15 21:45:27.359223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.666 [2024-07-15 21:45:27.359288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.666 [2024-07-15 21:45:27.359300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.666 [2024-07-15 21:45:27.359305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.666 [2024-07-15 21:45:27.359309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.666 [2024-07-15 21:45:27.359319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.666 qpair failed and we were unable to recover it. 00:29:37.666 [2024-07-15 21:45:27.369261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.666 [2024-07-15 21:45:27.369329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.666 [2024-07-15 21:45:27.369341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.666 [2024-07-15 21:45:27.369346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.666 [2024-07-15 21:45:27.369350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.667 [2024-07-15 21:45:27.369361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.667 qpair failed and we were unable to recover it. 
00:29:37.667 [2024-07-15 21:45:27.379264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.667 [2024-07-15 21:45:27.379324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.667 [2024-07-15 21:45:27.379342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.667 [2024-07-15 21:45:27.379347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.667 [2024-07-15 21:45:27.379351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.667 [2024-07-15 21:45:27.379362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.667 qpair failed and we were unable to recover it. 00:29:37.667 [2024-07-15 21:45:27.389283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.667 [2024-07-15 21:45:27.389350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.667 [2024-07-15 21:45:27.389364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.667 [2024-07-15 21:45:27.389369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.667 [2024-07-15 21:45:27.389373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.667 [2024-07-15 21:45:27.389384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.667 qpair failed and we were unable to recover it. 00:29:37.667 [2024-07-15 21:45:27.399359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.667 [2024-07-15 21:45:27.399447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.667 [2024-07-15 21:45:27.399459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.667 [2024-07-15 21:45:27.399464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.667 [2024-07-15 21:45:27.399468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.667 [2024-07-15 21:45:27.399478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.667 qpair failed and we were unable to recover it. 
00:29:37.667 [2024-07-15 21:45:27.409333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.667 [2024-07-15 21:45:27.409393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.667 [2024-07-15 21:45:27.409405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.667 [2024-07-15 21:45:27.409410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.667 [2024-07-15 21:45:27.409414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.667 [2024-07-15 21:45:27.409424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.667 qpair failed and we were unable to recover it. 00:29:37.667 [2024-07-15 21:45:27.419391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.667 [2024-07-15 21:45:27.419452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.667 [2024-07-15 21:45:27.419464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.667 [2024-07-15 21:45:27.419469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.667 [2024-07-15 21:45:27.419473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.667 [2024-07-15 21:45:27.419486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.667 qpair failed and we were unable to recover it. 00:29:37.667 [2024-07-15 21:45:27.429411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.667 [2024-07-15 21:45:27.429472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.667 [2024-07-15 21:45:27.429485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.667 [2024-07-15 21:45:27.429490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.667 [2024-07-15 21:45:27.429494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.667 [2024-07-15 21:45:27.429504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.667 qpair failed and we were unable to recover it. 
00:29:37.667 [2024-07-15 21:45:27.439415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.667 [2024-07-15 21:45:27.439483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.667 [2024-07-15 21:45:27.439495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.667 [2024-07-15 21:45:27.439500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.667 [2024-07-15 21:45:27.439504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.667 [2024-07-15 21:45:27.439514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.667 qpair failed and we were unable to recover it. 00:29:37.667 [2024-07-15 21:45:27.449385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.667 [2024-07-15 21:45:27.449464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.667 [2024-07-15 21:45:27.449476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.667 [2024-07-15 21:45:27.449481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.667 [2024-07-15 21:45:27.449485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.667 [2024-07-15 21:45:27.449496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.667 qpair failed and we were unable to recover it. 00:29:37.667 [2024-07-15 21:45:27.459517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.667 [2024-07-15 21:45:27.459579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.667 [2024-07-15 21:45:27.459591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.667 [2024-07-15 21:45:27.459596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.667 [2024-07-15 21:45:27.459600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.667 [2024-07-15 21:45:27.459610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.667 qpair failed and we were unable to recover it. 
00:29:37.667 [2024-07-15 21:45:27.469527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.933 [2024-07-15 21:45:27.469588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.933 [2024-07-15 21:45:27.469604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.933 [2024-07-15 21:45:27.469609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.933 [2024-07-15 21:45:27.469613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.933 [2024-07-15 21:45:27.469624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.933 qpair failed and we were unable to recover it. 00:29:37.933 [2024-07-15 21:45:27.479533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.933 [2024-07-15 21:45:27.479597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.933 [2024-07-15 21:45:27.479610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.933 [2024-07-15 21:45:27.479615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.933 [2024-07-15 21:45:27.479619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.933 [2024-07-15 21:45:27.479630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.933 qpair failed and we were unable to recover it. 00:29:37.933 [2024-07-15 21:45:27.489555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.933 [2024-07-15 21:45:27.489616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.933 [2024-07-15 21:45:27.489628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.933 [2024-07-15 21:45:27.489633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.933 [2024-07-15 21:45:27.489637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.933 [2024-07-15 21:45:27.489647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.933 qpair failed and we were unable to recover it. 
00:29:37.933 [2024-07-15 21:45:27.499586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.933 [2024-07-15 21:45:27.499648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.933 [2024-07-15 21:45:27.499660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.933 [2024-07-15 21:45:27.499665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.933 [2024-07-15 21:45:27.499669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.933 [2024-07-15 21:45:27.499680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.933 qpair failed and we were unable to recover it. 00:29:37.933 [2024-07-15 21:45:27.509624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.933 [2024-07-15 21:45:27.509737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.933 [2024-07-15 21:45:27.509750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.933 [2024-07-15 21:45:27.509755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.933 [2024-07-15 21:45:27.509761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.933 [2024-07-15 21:45:27.509772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.933 qpair failed and we were unable to recover it. 00:29:37.933 [2024-07-15 21:45:27.519537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.933 [2024-07-15 21:45:27.519651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.933 [2024-07-15 21:45:27.519663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.933 [2024-07-15 21:45:27.519668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.933 [2024-07-15 21:45:27.519672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.933 [2024-07-15 21:45:27.519683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.933 qpair failed and we were unable to recover it. 
00:29:37.933 [2024-07-15 21:45:27.529683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.933 [2024-07-15 21:45:27.529746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.933 [2024-07-15 21:45:27.529759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.933 [2024-07-15 21:45:27.529764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.933 [2024-07-15 21:45:27.529768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.933 [2024-07-15 21:45:27.529778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.933 qpair failed and we were unable to recover it. 00:29:37.933 [2024-07-15 21:45:27.539746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.933 [2024-07-15 21:45:27.539811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.933 [2024-07-15 21:45:27.539829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.933 [2024-07-15 21:45:27.539835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.934 [2024-07-15 21:45:27.539840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.934 [2024-07-15 21:45:27.539855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.934 qpair failed and we were unable to recover it. 00:29:37.934 [2024-07-15 21:45:27.549621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.934 [2024-07-15 21:45:27.549692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.934 [2024-07-15 21:45:27.549711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.934 [2024-07-15 21:45:27.549717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.934 [2024-07-15 21:45:27.549721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.934 [2024-07-15 21:45:27.549736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.934 qpair failed and we were unable to recover it. 
00:29:37.934 [2024-07-15 21:45:27.559780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.934 [2024-07-15 21:45:27.559861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.934 [2024-07-15 21:45:27.559879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.934 [2024-07-15 21:45:27.559885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.934 [2024-07-15 21:45:27.559890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.934 [2024-07-15 21:45:27.559904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.934 qpair failed and we were unable to recover it. 00:29:37.934 [2024-07-15 21:45:27.569796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.934 [2024-07-15 21:45:27.569860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.934 [2024-07-15 21:45:27.569874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.934 [2024-07-15 21:45:27.569880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.934 [2024-07-15 21:45:27.569885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.934 [2024-07-15 21:45:27.569897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.934 qpair failed and we were unable to recover it. 00:29:37.934 [2024-07-15 21:45:27.579810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.934 [2024-07-15 21:45:27.579880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.934 [2024-07-15 21:45:27.579898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.934 [2024-07-15 21:45:27.579904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.934 [2024-07-15 21:45:27.579909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.934 [2024-07-15 21:45:27.579923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.934 qpair failed and we were unable to recover it. 
00:29:37.934 [2024-07-15 21:45:27.589890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.934 [2024-07-15 21:45:27.590003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.934 [2024-07-15 21:45:27.590018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.934 [2024-07-15 21:45:27.590023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.934 [2024-07-15 21:45:27.590028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.934 [2024-07-15 21:45:27.590039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.934 qpair failed and we were unable to recover it. 00:29:37.934 [2024-07-15 21:45:27.599878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.934 [2024-07-15 21:45:27.599944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.934 [2024-07-15 21:45:27.599956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.934 [2024-07-15 21:45:27.599961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.934 [2024-07-15 21:45:27.599969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.934 [2024-07-15 21:45:27.599980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.934 qpair failed and we were unable to recover it. 00:29:37.934 [2024-07-15 21:45:27.609897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.934 [2024-07-15 21:45:27.609961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.934 [2024-07-15 21:45:27.609974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.934 [2024-07-15 21:45:27.609979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.934 [2024-07-15 21:45:27.609983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.934 [2024-07-15 21:45:27.609994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.934 qpair failed and we were unable to recover it. 
00:29:37.934 [2024-07-15 21:45:27.619941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.934 [2024-07-15 21:45:27.620002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.934 [2024-07-15 21:45:27.620014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.934 [2024-07-15 21:45:27.620019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.934 [2024-07-15 21:45:27.620024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.934 [2024-07-15 21:45:27.620035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.934 qpair failed and we were unable to recover it. 00:29:37.934 [2024-07-15 21:45:27.629936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.934 [2024-07-15 21:45:27.630001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.934 [2024-07-15 21:45:27.630014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.934 [2024-07-15 21:45:27.630019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.934 [2024-07-15 21:45:27.630023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.934 [2024-07-15 21:45:27.630034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.934 qpair failed and we were unable to recover it. 00:29:37.934 [2024-07-15 21:45:27.639973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.934 [2024-07-15 21:45:27.640042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.934 [2024-07-15 21:45:27.640054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.934 [2024-07-15 21:45:27.640059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.934 [2024-07-15 21:45:27.640063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.934 [2024-07-15 21:45:27.640074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.934 qpair failed and we were unable to recover it. 
00:29:37.934 [2024-07-15 21:45:27.650037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.934 [2024-07-15 21:45:27.650100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.934 [2024-07-15 21:45:27.650112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.934 [2024-07-15 21:45:27.650117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.934 [2024-07-15 21:45:27.650125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.934 [2024-07-15 21:45:27.650136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.934 qpair failed and we were unable to recover it. 00:29:37.934 [2024-07-15 21:45:27.660013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.934 [2024-07-15 21:45:27.660079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.934 [2024-07-15 21:45:27.660091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.934 [2024-07-15 21:45:27.660096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.934 [2024-07-15 21:45:27.660100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.934 [2024-07-15 21:45:27.660111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.934 qpair failed and we were unable to recover it. 00:29:37.934 [2024-07-15 21:45:27.670052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.934 [2024-07-15 21:45:27.670117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.934 [2024-07-15 21:45:27.670133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.934 [2024-07-15 21:45:27.670138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.934 [2024-07-15 21:45:27.670142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.934 [2024-07-15 21:45:27.670153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.934 qpair failed and we were unable to recover it. 
00:29:37.934 [2024-07-15 21:45:27.680084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.934 [2024-07-15 21:45:27.680151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.934 [2024-07-15 21:45:27.680164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.934 [2024-07-15 21:45:27.680168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.934 [2024-07-15 21:45:27.680173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.935 [2024-07-15 21:45:27.680183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.935 qpair failed and we were unable to recover it. 00:29:37.935 [2024-07-15 21:45:27.690132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.935 [2024-07-15 21:45:27.690194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.935 [2024-07-15 21:45:27.690206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.935 [2024-07-15 21:45:27.690214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.935 [2024-07-15 21:45:27.690218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.935 [2024-07-15 21:45:27.690229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.935 qpair failed and we were unable to recover it. 00:29:37.935 [2024-07-15 21:45:27.700134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.935 [2024-07-15 21:45:27.700193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.935 [2024-07-15 21:45:27.700205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.935 [2024-07-15 21:45:27.700210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.935 [2024-07-15 21:45:27.700214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.935 [2024-07-15 21:45:27.700225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.935 qpair failed and we were unable to recover it. 
00:29:37.935 [2024-07-15 21:45:27.710160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.935 [2024-07-15 21:45:27.710223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.935 [2024-07-15 21:45:27.710235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.935 [2024-07-15 21:45:27.710240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.935 [2024-07-15 21:45:27.710244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.935 [2024-07-15 21:45:27.710255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.935 qpair failed and we were unable to recover it. 00:29:37.935 [2024-07-15 21:45:27.720073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.935 [2024-07-15 21:45:27.720144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.935 [2024-07-15 21:45:27.720156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.935 [2024-07-15 21:45:27.720161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.935 [2024-07-15 21:45:27.720165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.935 [2024-07-15 21:45:27.720176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.935 qpair failed and we were unable to recover it. 00:29:37.935 [2024-07-15 21:45:27.730217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.935 [2024-07-15 21:45:27.730281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.935 [2024-07-15 21:45:27.730293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.935 [2024-07-15 21:45:27.730297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.935 [2024-07-15 21:45:27.730302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:37.935 [2024-07-15 21:45:27.730312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.935 qpair failed and we were unable to recover it. 
00:29:38.197 [2024-07-15 21:45:27.740238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.197 [2024-07-15 21:45:27.740306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.197 [2024-07-15 21:45:27.740319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.197 [2024-07-15 21:45:27.740324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.197 [2024-07-15 21:45:27.740329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.197 [2024-07-15 21:45:27.740340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.197 qpair failed and we were unable to recover it. 00:29:38.197 [2024-07-15 21:45:27.750304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.197 [2024-07-15 21:45:27.750365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.197 [2024-07-15 21:45:27.750377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.197 [2024-07-15 21:45:27.750382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.197 [2024-07-15 21:45:27.750386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.197 [2024-07-15 21:45:27.750397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.197 qpair failed and we were unable to recover it. 00:29:38.197 [2024-07-15 21:45:27.760298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.197 [2024-07-15 21:45:27.760364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.197 [2024-07-15 21:45:27.760376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.197 [2024-07-15 21:45:27.760381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.197 [2024-07-15 21:45:27.760385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.197 [2024-07-15 21:45:27.760396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.197 qpair failed and we were unable to recover it. 
00:29:38.197 [2024-07-15 21:45:27.770352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.197 [2024-07-15 21:45:27.770453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.197 [2024-07-15 21:45:27.770466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.197 [2024-07-15 21:45:27.770472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.197 [2024-07-15 21:45:27.770476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.197 [2024-07-15 21:45:27.770487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.197 qpair failed and we were unable to recover it. 00:29:38.197 [2024-07-15 21:45:27.780381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.198 [2024-07-15 21:45:27.780437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.198 [2024-07-15 21:45:27.780453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.198 [2024-07-15 21:45:27.780458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.198 [2024-07-15 21:45:27.780462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.198 [2024-07-15 21:45:27.780472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.198 qpair failed and we were unable to recover it. 00:29:38.198 [2024-07-15 21:45:27.790373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.198 [2024-07-15 21:45:27.790437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.198 [2024-07-15 21:45:27.790450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.198 [2024-07-15 21:45:27.790456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.198 [2024-07-15 21:45:27.790460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.198 [2024-07-15 21:45:27.790470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.198 qpair failed and we were unable to recover it. 
00:29:38.198 [2024-07-15 21:45:27.800396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.198 [2024-07-15 21:45:27.800459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.198 [2024-07-15 21:45:27.800472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.198 [2024-07-15 21:45:27.800476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.198 [2024-07-15 21:45:27.800481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.198 [2024-07-15 21:45:27.800491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.198 qpair failed and we were unable to recover it. 00:29:38.198 [2024-07-15 21:45:27.810484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.198 [2024-07-15 21:45:27.810589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.198 [2024-07-15 21:45:27.810601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.198 [2024-07-15 21:45:27.810605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.198 [2024-07-15 21:45:27.810609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.198 [2024-07-15 21:45:27.810620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.198 qpair failed and we were unable to recover it. 00:29:38.198 [2024-07-15 21:45:27.820490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.198 [2024-07-15 21:45:27.820593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.198 [2024-07-15 21:45:27.820605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.198 [2024-07-15 21:45:27.820610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.198 [2024-07-15 21:45:27.820614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.198 [2024-07-15 21:45:27.820628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.198 qpair failed and we were unable to recover it. 
00:29:38.198 [2024-07-15 21:45:27.830627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.198 [2024-07-15 21:45:27.830691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.198 [2024-07-15 21:45:27.830703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.198 [2024-07-15 21:45:27.830708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.198 [2024-07-15 21:45:27.830713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.198 [2024-07-15 21:45:27.830723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.198 qpair failed and we were unable to recover it. 00:29:38.198 [2024-07-15 21:45:27.840503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.198 [2024-07-15 21:45:27.840569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.198 [2024-07-15 21:45:27.840581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.198 [2024-07-15 21:45:27.840586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.198 [2024-07-15 21:45:27.840590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.198 [2024-07-15 21:45:27.840601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.198 qpair failed and we were unable to recover it. 00:29:38.198 [2024-07-15 21:45:27.850528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.198 [2024-07-15 21:45:27.850589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.198 [2024-07-15 21:45:27.850601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.198 [2024-07-15 21:45:27.850606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.198 [2024-07-15 21:45:27.850610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.198 [2024-07-15 21:45:27.850621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.198 qpair failed and we were unable to recover it. 
00:29:38.198 [2024-07-15 21:45:27.860570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.198 [2024-07-15 21:45:27.860629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.198 [2024-07-15 21:45:27.860641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.198 [2024-07-15 21:45:27.860646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.198 [2024-07-15 21:45:27.860650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.198 [2024-07-15 21:45:27.860661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.198 qpair failed and we were unable to recover it. 00:29:38.198 [2024-07-15 21:45:27.870581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.198 [2024-07-15 21:45:27.870644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.198 [2024-07-15 21:45:27.870659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.198 [2024-07-15 21:45:27.870664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.198 [2024-07-15 21:45:27.870668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.198 [2024-07-15 21:45:27.870679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.198 qpair failed and we were unable to recover it. 00:29:38.198 [2024-07-15 21:45:27.880617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.198 [2024-07-15 21:45:27.880680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.198 [2024-07-15 21:45:27.880692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.198 [2024-07-15 21:45:27.880697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.198 [2024-07-15 21:45:27.880701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.198 [2024-07-15 21:45:27.880711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.198 qpair failed and we were unable to recover it. 
00:29:38.198 [2024-07-15 21:45:27.890693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.198 [2024-07-15 21:45:27.890761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.198 [2024-07-15 21:45:27.890775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.198 [2024-07-15 21:45:27.890781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.198 [2024-07-15 21:45:27.890785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.198 [2024-07-15 21:45:27.890796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.198 qpair failed and we were unable to recover it. 00:29:38.198 [2024-07-15 21:45:27.900663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.199 [2024-07-15 21:45:27.900723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.199 [2024-07-15 21:45:27.900736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.199 [2024-07-15 21:45:27.900741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.199 [2024-07-15 21:45:27.900745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.199 [2024-07-15 21:45:27.900757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.199 qpair failed and we were unable to recover it. 00:29:38.199 [2024-07-15 21:45:27.910693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.199 [2024-07-15 21:45:27.910754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.199 [2024-07-15 21:45:27.910766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.199 [2024-07-15 21:45:27.910771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.199 [2024-07-15 21:45:27.910779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.199 [2024-07-15 21:45:27.910789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.199 qpair failed and we were unable to recover it. 
00:29:38.199 [2024-07-15 21:45:27.920606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.199 [2024-07-15 21:45:27.920672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.199 [2024-07-15 21:45:27.920684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.199 [2024-07-15 21:45:27.920689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.199 [2024-07-15 21:45:27.920693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.199 [2024-07-15 21:45:27.920704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.199 qpair failed and we were unable to recover it. 00:29:38.199 [2024-07-15 21:45:27.930730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.199 [2024-07-15 21:45:27.930805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.199 [2024-07-15 21:45:27.930817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.199 [2024-07-15 21:45:27.930822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.199 [2024-07-15 21:45:27.930826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.199 [2024-07-15 21:45:27.930837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.199 qpair failed and we were unable to recover it. 00:29:38.199 [2024-07-15 21:45:27.940768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.199 [2024-07-15 21:45:27.940848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.199 [2024-07-15 21:45:27.940867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.199 [2024-07-15 21:45:27.940873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.199 [2024-07-15 21:45:27.940877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.199 [2024-07-15 21:45:27.940892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.199 qpair failed and we were unable to recover it. 
00:29:38.199 [2024-07-15 21:45:27.950765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.199 [2024-07-15 21:45:27.950835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.199 [2024-07-15 21:45:27.950853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.199 [2024-07-15 21:45:27.950859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.199 [2024-07-15 21:45:27.950864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.199 [2024-07-15 21:45:27.950878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.199 qpair failed and we were unable to recover it. 00:29:38.199 [2024-07-15 21:45:27.960865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.199 [2024-07-15 21:45:27.961039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.199 [2024-07-15 21:45:27.961052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.199 [2024-07-15 21:45:27.961057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.199 [2024-07-15 21:45:27.961061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.199 [2024-07-15 21:45:27.961073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.199 qpair failed and we were unable to recover it. 00:29:38.199 [2024-07-15 21:45:27.970846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.199 [2024-07-15 21:45:27.970909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.199 [2024-07-15 21:45:27.970921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.199 [2024-07-15 21:45:27.970926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.199 [2024-07-15 21:45:27.970930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.199 [2024-07-15 21:45:27.970941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.199 qpair failed and we were unable to recover it. 
00:29:38.199 [2024-07-15 21:45:27.980869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.199 [2024-07-15 21:45:27.980933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.199 [2024-07-15 21:45:27.980945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.199 [2024-07-15 21:45:27.980950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.199 [2024-07-15 21:45:27.980954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.199 [2024-07-15 21:45:27.980965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.199 qpair failed and we were unable to recover it. 00:29:38.199 [2024-07-15 21:45:27.990918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.199 [2024-07-15 21:45:27.991031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.199 [2024-07-15 21:45:27.991043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.199 [2024-07-15 21:45:27.991048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.199 [2024-07-15 21:45:27.991052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.199 [2024-07-15 21:45:27.991063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.199 qpair failed and we were unable to recover it. 00:29:38.199 [2024-07-15 21:45:28.000889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.199 [2024-07-15 21:45:28.000954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.199 [2024-07-15 21:45:28.000966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.199 [2024-07-15 21:45:28.000971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.199 [2024-07-15 21:45:28.000978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.199 [2024-07-15 21:45:28.000989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.199 qpair failed and we were unable to recover it. 
00:29:38.462 [2024-07-15 21:45:28.010966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.462 [2024-07-15 21:45:28.011026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.462 [2024-07-15 21:45:28.011038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.462 [2024-07-15 21:45:28.011043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.462 [2024-07-15 21:45:28.011047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.462 [2024-07-15 21:45:28.011057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-07-15 21:45:28.020983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.462 [2024-07-15 21:45:28.021045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.462 [2024-07-15 21:45:28.021057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.462 [2024-07-15 21:45:28.021061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.462 [2024-07-15 21:45:28.021066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.462 [2024-07-15 21:45:28.021076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-07-15 21:45:28.031008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.462 [2024-07-15 21:45:28.031070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.462 [2024-07-15 21:45:28.031082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.462 [2024-07-15 21:45:28.031087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.462 [2024-07-15 21:45:28.031091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.462 [2024-07-15 21:45:28.031102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.462 qpair failed and we were unable to recover it. 
00:29:38.462 [2024-07-15 21:45:28.041041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.462 [2024-07-15 21:45:28.041155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.462 [2024-07-15 21:45:28.041168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.462 [2024-07-15 21:45:28.041172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.462 [2024-07-15 21:45:28.041176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.462 [2024-07-15 21:45:28.041187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-07-15 21:45:28.051057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.462 [2024-07-15 21:45:28.051127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.462 [2024-07-15 21:45:28.051139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.462 [2024-07-15 21:45:28.051144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.462 [2024-07-15 21:45:28.051148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.462 [2024-07-15 21:45:28.051159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-07-15 21:45:28.061029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.462 [2024-07-15 21:45:28.061094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.462 [2024-07-15 21:45:28.061106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.462 [2024-07-15 21:45:28.061110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.462 [2024-07-15 21:45:28.061114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.462 [2024-07-15 21:45:28.061128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.462 qpair failed and we were unable to recover it. 
00:29:38.462 [2024-07-15 21:45:28.071115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.462 [2024-07-15 21:45:28.071182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.462 [2024-07-15 21:45:28.071194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.462 [2024-07-15 21:45:28.071198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.462 [2024-07-15 21:45:28.071202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.462 [2024-07-15 21:45:28.071213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.462 qpair failed and we were unable to recover it. 00:29:38.462 [2024-07-15 21:45:28.081155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.462 [2024-07-15 21:45:28.081221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.462 [2024-07-15 21:45:28.081233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.463 [2024-07-15 21:45:28.081238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.463 [2024-07-15 21:45:28.081242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.463 [2024-07-15 21:45:28.081253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-07-15 21:45:28.091063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.463 [2024-07-15 21:45:28.091132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.463 [2024-07-15 21:45:28.091146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.463 [2024-07-15 21:45:28.091154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.463 [2024-07-15 21:45:28.091158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.463 [2024-07-15 21:45:28.091170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.463 qpair failed and we were unable to recover it. 
00:29:38.463 [2024-07-15 21:45:28.101207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.463 [2024-07-15 21:45:28.101270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.463 [2024-07-15 21:45:28.101282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.463 [2024-07-15 21:45:28.101287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.463 [2024-07-15 21:45:28.101291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.463 [2024-07-15 21:45:28.101302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-07-15 21:45:28.111216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.463 [2024-07-15 21:45:28.111276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.463 [2024-07-15 21:45:28.111287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.463 [2024-07-15 21:45:28.111292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.463 [2024-07-15 21:45:28.111296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.463 [2024-07-15 21:45:28.111307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-07-15 21:45:28.121260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.463 [2024-07-15 21:45:28.121330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.463 [2024-07-15 21:45:28.121341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.463 [2024-07-15 21:45:28.121346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.463 [2024-07-15 21:45:28.121350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.463 [2024-07-15 21:45:28.121360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.463 qpair failed and we were unable to recover it. 
00:29:38.463 [2024-07-15 21:45:28.131283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.463 [2024-07-15 21:45:28.131349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.463 [2024-07-15 21:45:28.131361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.463 [2024-07-15 21:45:28.131366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.463 [2024-07-15 21:45:28.131370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.463 [2024-07-15 21:45:28.131380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-07-15 21:45:28.141365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.463 [2024-07-15 21:45:28.141443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.463 [2024-07-15 21:45:28.141455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.463 [2024-07-15 21:45:28.141460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.463 [2024-07-15 21:45:28.141464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.463 [2024-07-15 21:45:28.141475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-07-15 21:45:28.151236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.463 [2024-07-15 21:45:28.151301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.463 [2024-07-15 21:45:28.151313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.463 [2024-07-15 21:45:28.151318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.463 [2024-07-15 21:45:28.151322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.463 [2024-07-15 21:45:28.151333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.463 qpair failed and we were unable to recover it. 
00:29:38.463 [2024-07-15 21:45:28.161419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.463 [2024-07-15 21:45:28.161487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.463 [2024-07-15 21:45:28.161500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.463 [2024-07-15 21:45:28.161505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.463 [2024-07-15 21:45:28.161509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.463 [2024-07-15 21:45:28.161520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-07-15 21:45:28.171358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.463 [2024-07-15 21:45:28.171420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.463 [2024-07-15 21:45:28.171432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.463 [2024-07-15 21:45:28.171437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.463 [2024-07-15 21:45:28.171441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.463 [2024-07-15 21:45:28.171452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-07-15 21:45:28.181470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.463 [2024-07-15 21:45:28.181532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.463 [2024-07-15 21:45:28.181548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.463 [2024-07-15 21:45:28.181553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.463 [2024-07-15 21:45:28.181557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.463 [2024-07-15 21:45:28.181567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.463 qpair failed and we were unable to recover it. 
00:29:38.463 [2024-07-15 21:45:28.191448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.463 [2024-07-15 21:45:28.191513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.463 [2024-07-15 21:45:28.191525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.463 [2024-07-15 21:45:28.191531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.463 [2024-07-15 21:45:28.191535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.463 [2024-07-15 21:45:28.191545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-07-15 21:45:28.201377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.463 [2024-07-15 21:45:28.201444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.463 [2024-07-15 21:45:28.201456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.463 [2024-07-15 21:45:28.201461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.463 [2024-07-15 21:45:28.201465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.463 [2024-07-15 21:45:28.201476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.463 qpair failed and we were unable to recover it. 00:29:38.463 [2024-07-15 21:45:28.211513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.463 [2024-07-15 21:45:28.211576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.463 [2024-07-15 21:45:28.211588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.463 [2024-07-15 21:45:28.211593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.464 [2024-07-15 21:45:28.211597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.464 [2024-07-15 21:45:28.211607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.464 qpair failed and we were unable to recover it. 
00:29:38.464 [2024-07-15 21:45:28.221576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.464 [2024-07-15 21:45:28.221637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.464 [2024-07-15 21:45:28.221649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.464 [2024-07-15 21:45:28.221654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.464 [2024-07-15 21:45:28.221658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.464 [2024-07-15 21:45:28.221672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.464 qpair failed and we were unable to recover it. 00:29:38.464 [2024-07-15 21:45:28.231567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.464 [2024-07-15 21:45:28.231629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.464 [2024-07-15 21:45:28.231642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.464 [2024-07-15 21:45:28.231647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.464 [2024-07-15 21:45:28.231651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.464 [2024-07-15 21:45:28.231661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.464 qpair failed and we were unable to recover it. 00:29:38.464 [2024-07-15 21:45:28.241594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.464 [2024-07-15 21:45:28.241655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.464 [2024-07-15 21:45:28.241667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.464 [2024-07-15 21:45:28.241672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.464 [2024-07-15 21:45:28.241676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.464 [2024-07-15 21:45:28.241687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.464 qpair failed and we were unable to recover it. 
00:29:38.464 [2024-07-15 21:45:28.251608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.464 [2024-07-15 21:45:28.251682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.464 [2024-07-15 21:45:28.251693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.464 [2024-07-15 21:45:28.251698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.464 [2024-07-15 21:45:28.251702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.464 [2024-07-15 21:45:28.251712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.464 qpair failed and we were unable to recover it. 00:29:38.464 [2024-07-15 21:45:28.261643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.464 [2024-07-15 21:45:28.261706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.464 [2024-07-15 21:45:28.261718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.464 [2024-07-15 21:45:28.261723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.464 [2024-07-15 21:45:28.261727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.464 [2024-07-15 21:45:28.261738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.464 qpair failed and we were unable to recover it. 00:29:38.726 [2024-07-15 21:45:28.271670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.726 [2024-07-15 21:45:28.271732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.726 [2024-07-15 21:45:28.271747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.726 [2024-07-15 21:45:28.271752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.726 [2024-07-15 21:45:28.271756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.726 [2024-07-15 21:45:28.271766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.726 qpair failed and we were unable to recover it. 
00:29:38.726 [2024-07-15 21:45:28.281698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.726 [2024-07-15 21:45:28.281764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.726 [2024-07-15 21:45:28.281776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.726 [2024-07-15 21:45:28.281781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.726 [2024-07-15 21:45:28.281785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.726 [2024-07-15 21:45:28.281795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.726 qpair failed and we were unable to recover it. 00:29:38.726 [2024-07-15 21:45:28.291680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.726 [2024-07-15 21:45:28.291742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.726 [2024-07-15 21:45:28.291754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.726 [2024-07-15 21:45:28.291759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.726 [2024-07-15 21:45:28.291763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.726 [2024-07-15 21:45:28.291774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.726 qpair failed and we were unable to recover it. 00:29:38.726 [2024-07-15 21:45:28.301776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.726 [2024-07-15 21:45:28.301839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.726 [2024-07-15 21:45:28.301851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.726 [2024-07-15 21:45:28.301856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.726 [2024-07-15 21:45:28.301860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.726 [2024-07-15 21:45:28.301871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.726 qpair failed and we were unable to recover it. 
00:29:38.726 [2024-07-15 21:45:28.311781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.726 [2024-07-15 21:45:28.311844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.726 [2024-07-15 21:45:28.311856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.726 [2024-07-15 21:45:28.311861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.726 [2024-07-15 21:45:28.311865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.726 [2024-07-15 21:45:28.311879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.726 qpair failed and we were unable to recover it. 00:29:38.726 [2024-07-15 21:45:28.321804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.726 [2024-07-15 21:45:28.321867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.726 [2024-07-15 21:45:28.321879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.726 [2024-07-15 21:45:28.321884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.726 [2024-07-15 21:45:28.321889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.726 [2024-07-15 21:45:28.321899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.726 qpair failed and we were unable to recover it. 00:29:38.726 [2024-07-15 21:45:28.331831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.726 [2024-07-15 21:45:28.331899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.726 [2024-07-15 21:45:28.331918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.726 [2024-07-15 21:45:28.331924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.726 [2024-07-15 21:45:28.331929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.726 [2024-07-15 21:45:28.331943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.726 qpair failed and we were unable to recover it. 
00:29:38.726 [2024-07-15 21:45:28.341877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.726 [2024-07-15 21:45:28.341951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.726 [2024-07-15 21:45:28.341964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.726 [2024-07-15 21:45:28.341970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.726 [2024-07-15 21:45:28.341974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.726 [2024-07-15 21:45:28.341985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.726 qpair failed and we were unable to recover it. 00:29:38.726 [2024-07-15 21:45:28.351849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.726 [2024-07-15 21:45:28.351920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.726 [2024-07-15 21:45:28.351932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.726 [2024-07-15 21:45:28.351938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.726 [2024-07-15 21:45:28.351942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.726 [2024-07-15 21:45:28.351953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.726 qpair failed and we were unable to recover it. 00:29:38.726 [2024-07-15 21:45:28.361920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.726 [2024-07-15 21:45:28.361991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.726 [2024-07-15 21:45:28.362003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.726 [2024-07-15 21:45:28.362008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.726 [2024-07-15 21:45:28.362013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.726 [2024-07-15 21:45:28.362023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.726 qpair failed and we were unable to recover it. 
00:29:38.726 [2024-07-15 21:45:28.371937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.726 [2024-07-15 21:45:28.371996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.726 [2024-07-15 21:45:28.372008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.726 [2024-07-15 21:45:28.372013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.726 [2024-07-15 21:45:28.372017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.726 [2024-07-15 21:45:28.372028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.726 qpair failed and we were unable to recover it. 00:29:38.726 [2024-07-15 21:45:28.381975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.726 [2024-07-15 21:45:28.382036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.726 [2024-07-15 21:45:28.382049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.726 [2024-07-15 21:45:28.382054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.726 [2024-07-15 21:45:28.382058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.726 [2024-07-15 21:45:28.382069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.726 qpair failed and we were unable to recover it. 00:29:38.726 [2024-07-15 21:45:28.391994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.726 [2024-07-15 21:45:28.392058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.726 [2024-07-15 21:45:28.392070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.726 [2024-07-15 21:45:28.392075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.726 [2024-07-15 21:45:28.392079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.726 [2024-07-15 21:45:28.392090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.726 qpair failed and we were unable to recover it. 
00:29:38.726 [2024-07-15 21:45:28.402015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.726 [2024-07-15 21:45:28.402086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.727 [2024-07-15 21:45:28.402097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.727 [2024-07-15 21:45:28.402102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.727 [2024-07-15 21:45:28.402110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.727 [2024-07-15 21:45:28.402120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.727 qpair failed and we were unable to recover it. 00:29:38.727 [2024-07-15 21:45:28.411934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.727 [2024-07-15 21:45:28.411997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.727 [2024-07-15 21:45:28.412009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.727 [2024-07-15 21:45:28.412014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.727 [2024-07-15 21:45:28.412018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.727 [2024-07-15 21:45:28.412028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.727 qpair failed and we were unable to recover it. 00:29:38.727 [2024-07-15 21:45:28.422109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.727 [2024-07-15 21:45:28.422170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.727 [2024-07-15 21:45:28.422185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.727 [2024-07-15 21:45:28.422190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.727 [2024-07-15 21:45:28.422194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.727 [2024-07-15 21:45:28.422206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.727 qpair failed and we were unable to recover it. 
00:29:38.727 [2024-07-15 21:45:28.432106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.727 [2024-07-15 21:45:28.432171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.727 [2024-07-15 21:45:28.432183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.727 [2024-07-15 21:45:28.432188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.727 [2024-07-15 21:45:28.432192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.727 [2024-07-15 21:45:28.432203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.727 qpair failed and we were unable to recover it. 00:29:38.727 [2024-07-15 21:45:28.442089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.727 [2024-07-15 21:45:28.442158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.727 [2024-07-15 21:45:28.442171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.727 [2024-07-15 21:45:28.442175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.727 [2024-07-15 21:45:28.442180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.727 [2024-07-15 21:45:28.442190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.727 qpair failed and we were unable to recover it. 00:29:38.727 [2024-07-15 21:45:28.452154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.727 [2024-07-15 21:45:28.452225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.727 [2024-07-15 21:45:28.452237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.727 [2024-07-15 21:45:28.452242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.727 [2024-07-15 21:45:28.452246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.727 [2024-07-15 21:45:28.452256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.727 qpair failed and we were unable to recover it. 
00:29:38.727 [2024-07-15 21:45:28.462204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.727 [2024-07-15 21:45:28.462268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.727 [2024-07-15 21:45:28.462280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.727 [2024-07-15 21:45:28.462285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.727 [2024-07-15 21:45:28.462289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.727 [2024-07-15 21:45:28.462299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.727 qpair failed and we were unable to recover it. 00:29:38.727 [2024-07-15 21:45:28.472248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.727 [2024-07-15 21:45:28.472320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.727 [2024-07-15 21:45:28.472332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.727 [2024-07-15 21:45:28.472337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.727 [2024-07-15 21:45:28.472341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.727 [2024-07-15 21:45:28.472352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.727 qpair failed and we were unable to recover it. 00:29:38.727 [2024-07-15 21:45:28.482231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.727 [2024-07-15 21:45:28.482298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.727 [2024-07-15 21:45:28.482311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.727 [2024-07-15 21:45:28.482318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.727 [2024-07-15 21:45:28.482322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.727 [2024-07-15 21:45:28.482334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.727 qpair failed and we were unable to recover it. 
00:29:38.727 [2024-07-15 21:45:28.492239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.727 [2024-07-15 21:45:28.492302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.727 [2024-07-15 21:45:28.492315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.727 [2024-07-15 21:45:28.492323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.727 [2024-07-15 21:45:28.492327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.727 [2024-07-15 21:45:28.492339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.727 qpair failed and we were unable to recover it. 00:29:38.727 [2024-07-15 21:45:28.502334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.727 [2024-07-15 21:45:28.502399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.727 [2024-07-15 21:45:28.502411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.727 [2024-07-15 21:45:28.502416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.727 [2024-07-15 21:45:28.502420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.727 [2024-07-15 21:45:28.502431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.727 qpair failed and we were unable to recover it. 00:29:38.727 [2024-07-15 21:45:28.512320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.727 [2024-07-15 21:45:28.512381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.727 [2024-07-15 21:45:28.512393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.727 [2024-07-15 21:45:28.512398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.727 [2024-07-15 21:45:28.512402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.727 [2024-07-15 21:45:28.512413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.727 qpair failed and we were unable to recover it. 
00:29:38.727 [2024-07-15 21:45:28.522349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.727 [2024-07-15 21:45:28.522415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.727 [2024-07-15 21:45:28.522426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.727 [2024-07-15 21:45:28.522431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.727 [2024-07-15 21:45:28.522435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.727 [2024-07-15 21:45:28.522446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.727 qpair failed and we were unable to recover it. 00:29:38.989 [2024-07-15 21:45:28.532380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.989 [2024-07-15 21:45:28.532443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.989 [2024-07-15 21:45:28.532455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.989 [2024-07-15 21:45:28.532460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.989 [2024-07-15 21:45:28.532465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.989 [2024-07-15 21:45:28.532475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.989 qpair failed and we were unable to recover it. 00:29:38.989 [2024-07-15 21:45:28.542408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.989 [2024-07-15 21:45:28.542503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.989 [2024-07-15 21:45:28.542515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.989 [2024-07-15 21:45:28.542520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.989 [2024-07-15 21:45:28.542524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.989 [2024-07-15 21:45:28.542535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.989 qpair failed and we were unable to recover it. 
00:29:38.989 [2024-07-15 21:45:28.552302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.989 [2024-07-15 21:45:28.552362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.989 [2024-07-15 21:45:28.552374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.989 [2024-07-15 21:45:28.552379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.989 [2024-07-15 21:45:28.552383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.989 [2024-07-15 21:45:28.552394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.989 qpair failed and we were unable to recover it. 00:29:38.989 [2024-07-15 21:45:28.562447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.989 [2024-07-15 21:45:28.562518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.989 [2024-07-15 21:45:28.562530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.989 [2024-07-15 21:45:28.562535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.989 [2024-07-15 21:45:28.562539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.989 [2024-07-15 21:45:28.562549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.989 qpair failed and we were unable to recover it. 00:29:38.989 [2024-07-15 21:45:28.572368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.989 [2024-07-15 21:45:28.572431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.989 [2024-07-15 21:45:28.572444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.989 [2024-07-15 21:45:28.572449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.989 [2024-07-15 21:45:28.572453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.989 [2024-07-15 21:45:28.572464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.989 qpair failed and we were unable to recover it. 
00:29:38.989 [2024-07-15 21:45:28.582504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.989 [2024-07-15 21:45:28.582571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.989 [2024-07-15 21:45:28.582583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.989 [2024-07-15 21:45:28.582591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.989 [2024-07-15 21:45:28.582595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.989 [2024-07-15 21:45:28.582606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.989 qpair failed and we were unable to recover it. 00:29:38.989 [2024-07-15 21:45:28.592545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.989 [2024-07-15 21:45:28.592609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.989 [2024-07-15 21:45:28.592621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.989 [2024-07-15 21:45:28.592626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.989 [2024-07-15 21:45:28.592630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.989 [2024-07-15 21:45:28.592641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.989 qpair failed and we were unable to recover it. 00:29:38.989 [2024-07-15 21:45:28.602575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.989 [2024-07-15 21:45:28.602642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.989 [2024-07-15 21:45:28.602654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.989 [2024-07-15 21:45:28.602658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.989 [2024-07-15 21:45:28.602663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.989 [2024-07-15 21:45:28.602674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.989 qpair failed and we were unable to recover it. 
00:29:38.989 [2024-07-15 21:45:28.612580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.612648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.612661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.612666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.612670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.612681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 00:29:38.990 [2024-07-15 21:45:28.622690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.622752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.622764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.622769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.622773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.622784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 00:29:38.990 [2024-07-15 21:45:28.632679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.632747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.632765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.632771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.632776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.632790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 
00:29:38.990 [2024-07-15 21:45:28.642726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.642794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.642806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.642812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.642816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.642827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 00:29:38.990 [2024-07-15 21:45:28.652722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.652789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.652807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.652813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.652817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.652831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 00:29:38.990 [2024-07-15 21:45:28.662718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.662784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.662803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.662809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.662813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.662827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 
00:29:38.990 [2024-07-15 21:45:28.672770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.672837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.672859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.672865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.672870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.672884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 00:29:38.990 [2024-07-15 21:45:28.682768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.682849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.682867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.682873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.682878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.682892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 00:29:38.990 [2024-07-15 21:45:28.692794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.692867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.692886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.692892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.692896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.692911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 
00:29:38.990 [2024-07-15 21:45:28.702837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.702900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.702919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.702925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.702930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.702944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 00:29:38.990 [2024-07-15 21:45:28.712886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.712947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.712960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.712966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.712970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.712986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 00:29:38.990 [2024-07-15 21:45:28.722884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.722952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.722970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.722976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.722981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.722995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 
00:29:38.990 [2024-07-15 21:45:28.732866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.732930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.732943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.732948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.732952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.732964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 00:29:38.990 [2024-07-15 21:45:28.742923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.743106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.743118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.743126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.743131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.743142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 00:29:38.990 [2024-07-15 21:45:28.752965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.753037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.753049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.753053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.753057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.753068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 
00:29:38.990 [2024-07-15 21:45:28.763001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.763072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.763088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.763093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.763097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.763108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.990 qpair failed and we were unable to recover it. 00:29:38.990 [2024-07-15 21:45:28.773032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.990 [2024-07-15 21:45:28.773096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.990 [2024-07-15 21:45:28.773108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.990 [2024-07-15 21:45:28.773113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.990 [2024-07-15 21:45:28.773117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.990 [2024-07-15 21:45:28.773132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.991 qpair failed and we were unable to recover it. 00:29:38.991 [2024-07-15 21:45:28.783002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.991 [2024-07-15 21:45:28.783064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.991 [2024-07-15 21:45:28.783076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.991 [2024-07-15 21:45:28.783081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.991 [2024-07-15 21:45:28.783085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.991 [2024-07-15 21:45:28.783096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.991 qpair failed and we were unable to recover it. 
00:29:38.991 [2024-07-15 21:45:28.793065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.991 [2024-07-15 21:45:28.793130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.991 [2024-07-15 21:45:28.793143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.991 [2024-07-15 21:45:28.793148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.991 [2024-07-15 21:45:28.793152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:38.991 [2024-07-15 21:45:28.793163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.991 qpair failed and we were unable to recover it. 00:29:39.252 [2024-07-15 21:45:28.803100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.252 [2024-07-15 21:45:28.803172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.252 [2024-07-15 21:45:28.803185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.252 [2024-07-15 21:45:28.803191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.252 [2024-07-15 21:45:28.803199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.252 [2024-07-15 21:45:28.803210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.252 qpair failed and we were unable to recover it. 00:29:39.252 [2024-07-15 21:45:28.813156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.252 [2024-07-15 21:45:28.813216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.252 [2024-07-15 21:45:28.813228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.252 [2024-07-15 21:45:28.813233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.252 [2024-07-15 21:45:28.813237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.252 [2024-07-15 21:45:28.813247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.252 qpair failed and we were unable to recover it. 
00:29:39.252 [2024-07-15 21:45:28.823151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.252 [2024-07-15 21:45:28.823216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.252 [2024-07-15 21:45:28.823228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.252 [2024-07-15 21:45:28.823233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.252 [2024-07-15 21:45:28.823237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.252 [2024-07-15 21:45:28.823248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.252 qpair failed and we were unable to recover it. 00:29:39.252 [2024-07-15 21:45:28.833179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.252 [2024-07-15 21:45:28.833244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.252 [2024-07-15 21:45:28.833256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.252 [2024-07-15 21:45:28.833261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.252 [2024-07-15 21:45:28.833265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.252 [2024-07-15 21:45:28.833276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.252 qpair failed and we were unable to recover it. 00:29:39.252 [2024-07-15 21:45:28.843204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.252 [2024-07-15 21:45:28.843268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.252 [2024-07-15 21:45:28.843280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.252 [2024-07-15 21:45:28.843285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.252 [2024-07-15 21:45:28.843289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.252 [2024-07-15 21:45:28.843301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.252 qpair failed and we were unable to recover it. 
00:29:39.252 [2024-07-15 21:45:28.853217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.252 [2024-07-15 21:45:28.853287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.252 [2024-07-15 21:45:28.853299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.252 [2024-07-15 21:45:28.853304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.252 [2024-07-15 21:45:28.853308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.252 [2024-07-15 21:45:28.853319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.252 qpair failed and we were unable to recover it. 00:29:39.253 [2024-07-15 21:45:28.863259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.253 [2024-07-15 21:45:28.863326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.253 [2024-07-15 21:45:28.863338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.253 [2024-07-15 21:45:28.863343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.253 [2024-07-15 21:45:28.863347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.253 [2024-07-15 21:45:28.863358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.253 qpair failed and we were unable to recover it. 00:29:39.253 [2024-07-15 21:45:28.873285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.253 [2024-07-15 21:45:28.873346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.253 [2024-07-15 21:45:28.873358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.253 [2024-07-15 21:45:28.873363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.253 [2024-07-15 21:45:28.873367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.253 [2024-07-15 21:45:28.873377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.253 qpair failed and we were unable to recover it. 
00:29:39.253 [2024-07-15 21:45:28.883293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.253 [2024-07-15 21:45:28.883360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.253 [2024-07-15 21:45:28.883372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.253 [2024-07-15 21:45:28.883377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.253 [2024-07-15 21:45:28.883381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.253 [2024-07-15 21:45:28.883391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.253 qpair failed and we were unable to recover it. 00:29:39.253 [2024-07-15 21:45:28.893333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.253 [2024-07-15 21:45:28.893396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.253 [2024-07-15 21:45:28.893408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.253 [2024-07-15 21:45:28.893417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.253 [2024-07-15 21:45:28.893421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.253 [2024-07-15 21:45:28.893431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.253 qpair failed and we were unable to recover it. 00:29:39.253 [2024-07-15 21:45:28.903348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.253 [2024-07-15 21:45:28.903409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.253 [2024-07-15 21:45:28.903421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.253 [2024-07-15 21:45:28.903426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.253 [2024-07-15 21:45:28.903430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.253 [2024-07-15 21:45:28.903440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.253 qpair failed and we were unable to recover it. 
00:29:39.253 [2024-07-15 21:45:28.913457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.253 [2024-07-15 21:45:28.913522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.253 [2024-07-15 21:45:28.913534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.253 [2024-07-15 21:45:28.913539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.253 [2024-07-15 21:45:28.913543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.253 [2024-07-15 21:45:28.913554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.253 qpair failed and we were unable to recover it. 00:29:39.253 [2024-07-15 21:45:28.923302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.253 [2024-07-15 21:45:28.923474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.253 [2024-07-15 21:45:28.923486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.253 [2024-07-15 21:45:28.923491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.253 [2024-07-15 21:45:28.923495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.253 [2024-07-15 21:45:28.923505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.253 qpair failed and we were unable to recover it. 00:29:39.253 [2024-07-15 21:45:28.933434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.253 [2024-07-15 21:45:28.933495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.253 [2024-07-15 21:45:28.933508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.253 [2024-07-15 21:45:28.933512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.253 [2024-07-15 21:45:28.933516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.253 [2024-07-15 21:45:28.933527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.253 qpair failed and we were unable to recover it. 
00:29:39.253 [2024-07-15 21:45:28.943371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.253 [2024-07-15 21:45:28.943438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.253 [2024-07-15 21:45:28.943450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.253 [2024-07-15 21:45:28.943455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.253 [2024-07-15 21:45:28.943460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.253 [2024-07-15 21:45:28.943471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.253 qpair failed and we were unable to recover it. 00:29:39.253 [2024-07-15 21:45:28.953456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.253 [2024-07-15 21:45:28.953518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.253 [2024-07-15 21:45:28.953531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.253 [2024-07-15 21:45:28.953539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.253 [2024-07-15 21:45:28.953543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.253 [2024-07-15 21:45:28.953555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.253 qpair failed and we were unable to recover it. 00:29:39.253 [2024-07-15 21:45:28.963533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.253 [2024-07-15 21:45:28.963599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.253 [2024-07-15 21:45:28.963611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.253 [2024-07-15 21:45:28.963616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.253 [2024-07-15 21:45:28.963620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.253 [2024-07-15 21:45:28.963632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.253 qpair failed and we were unable to recover it. 
00:29:39.253 [2024-07-15 21:45:28.973550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.253 [2024-07-15 21:45:28.973609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.253 [2024-07-15 21:45:28.973622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.253 [2024-07-15 21:45:28.973627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.253 [2024-07-15 21:45:28.973631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.253 [2024-07-15 21:45:28.973642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.253 qpair failed and we were unable to recover it. 00:29:39.253 [2024-07-15 21:45:28.983592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.253 [2024-07-15 21:45:28.983695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.253 [2024-07-15 21:45:28.983707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.253 [2024-07-15 21:45:28.983715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.253 [2024-07-15 21:45:28.983719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.253 [2024-07-15 21:45:28.983730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.253 qpair failed and we were unable to recover it. 00:29:39.253 [2024-07-15 21:45:28.993618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.253 [2024-07-15 21:45:28.993679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.253 [2024-07-15 21:45:28.993691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.253 [2024-07-15 21:45:28.993696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.253 [2024-07-15 21:45:28.993701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.253 [2024-07-15 21:45:28.993711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.253 qpair failed and we were unable to recover it. 
00:29:39.253 [2024-07-15 21:45:29.003679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.254 [2024-07-15 21:45:29.003792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.254 [2024-07-15 21:45:29.003804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.254 [2024-07-15 21:45:29.003809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.254 [2024-07-15 21:45:29.003813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.254 [2024-07-15 21:45:29.003824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-07-15 21:45:29.013653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.254 [2024-07-15 21:45:29.013716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.254 [2024-07-15 21:45:29.013728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.254 [2024-07-15 21:45:29.013733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.254 [2024-07-15 21:45:29.013737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.254 [2024-07-15 21:45:29.013747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-07-15 21:45:29.023716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.254 [2024-07-15 21:45:29.023778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.254 [2024-07-15 21:45:29.023790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.254 [2024-07-15 21:45:29.023795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.254 [2024-07-15 21:45:29.023799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.254 [2024-07-15 21:45:29.023809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.254 qpair failed and we were unable to recover it. 
00:29:39.254 [2024-07-15 21:45:29.033719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.254 [2024-07-15 21:45:29.033780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.254 [2024-07-15 21:45:29.033792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.254 [2024-07-15 21:45:29.033797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.254 [2024-07-15 21:45:29.033801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.254 [2024-07-15 21:45:29.033812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-07-15 21:45:29.043747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.254 [2024-07-15 21:45:29.043813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.254 [2024-07-15 21:45:29.043826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.254 [2024-07-15 21:45:29.043831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.254 [2024-07-15 21:45:29.043835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.254 [2024-07-15 21:45:29.043845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-07-15 21:45:29.053750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.254 [2024-07-15 21:45:29.053817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.254 [2024-07-15 21:45:29.053829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.254 [2024-07-15 21:45:29.053834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.254 [2024-07-15 21:45:29.053838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.254 [2024-07-15 21:45:29.053849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.254 qpair failed and we were unable to recover it. 
00:29:39.517 [2024-07-15 21:45:29.063677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.517 [2024-07-15 21:45:29.063752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.517 [2024-07-15 21:45:29.063764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.517 [2024-07-15 21:45:29.063770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.517 [2024-07-15 21:45:29.063774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.517 [2024-07-15 21:45:29.063785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.517 qpair failed and we were unable to recover it. 00:29:39.517 [2024-07-15 21:45:29.073825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.517 [2024-07-15 21:45:29.073892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.517 [2024-07-15 21:45:29.073914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.517 [2024-07-15 21:45:29.073920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.517 [2024-07-15 21:45:29.073925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.517 [2024-07-15 21:45:29.073939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.517 qpair failed and we were unable to recover it. 00:29:39.517 [2024-07-15 21:45:29.083852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.517 [2024-07-15 21:45:29.083923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.517 [2024-07-15 21:45:29.083942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.517 [2024-07-15 21:45:29.083947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.517 [2024-07-15 21:45:29.083952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.517 [2024-07-15 21:45:29.083966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.517 qpair failed and we were unable to recover it. 
00:29:39.517 [2024-07-15 21:45:29.093870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.517 [2024-07-15 21:45:29.093936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.517 [2024-07-15 21:45:29.093954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.517 [2024-07-15 21:45:29.093960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.517 [2024-07-15 21:45:29.093965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.517 [2024-07-15 21:45:29.093979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.517 qpair failed and we were unable to recover it. 00:29:39.517 [2024-07-15 21:45:29.103893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.517 [2024-07-15 21:45:29.103968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.517 [2024-07-15 21:45:29.103987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.517 [2024-07-15 21:45:29.103993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.517 [2024-07-15 21:45:29.103997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.517 [2024-07-15 21:45:29.104011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.517 qpair failed and we were unable to recover it. 00:29:39.517 [2024-07-15 21:45:29.113836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.517 [2024-07-15 21:45:29.113918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.517 [2024-07-15 21:45:29.113932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.517 [2024-07-15 21:45:29.113937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.517 [2024-07-15 21:45:29.113941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.517 [2024-07-15 21:45:29.113957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.517 qpair failed and we were unable to recover it. 
00:29:39.517 [2024-07-15 21:45:29.123951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.517 [2024-07-15 21:45:29.124014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.517 [2024-07-15 21:45:29.124027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.517 [2024-07-15 21:45:29.124032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.517 [2024-07-15 21:45:29.124036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.517 [2024-07-15 21:45:29.124047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.517 qpair failed and we were unable to recover it. 00:29:39.517 [2024-07-15 21:45:29.133983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.517 [2024-07-15 21:45:29.134042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.517 [2024-07-15 21:45:29.134054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.517 [2024-07-15 21:45:29.134059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.517 [2024-07-15 21:45:29.134063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.517 [2024-07-15 21:45:29.134074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.517 qpair failed and we were unable to recover it. 00:29:39.517 [2024-07-15 21:45:29.143891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.517 [2024-07-15 21:45:29.143957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.517 [2024-07-15 21:45:29.143969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.517 [2024-07-15 21:45:29.143974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.517 [2024-07-15 21:45:29.143978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.517 [2024-07-15 21:45:29.143989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.517 qpair failed and we were unable to recover it. 
00:29:39.517 [2024-07-15 21:45:29.154044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.517 [2024-07-15 21:45:29.154106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.517 [2024-07-15 21:45:29.154118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.517 [2024-07-15 21:45:29.154126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.517 [2024-07-15 21:45:29.154130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.517 [2024-07-15 21:45:29.154141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.517 qpair failed and we were unable to recover it. 00:29:39.517 [2024-07-15 21:45:29.164044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.517 [2024-07-15 21:45:29.164125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.517 [2024-07-15 21:45:29.164141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.517 [2024-07-15 21:45:29.164146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.517 [2024-07-15 21:45:29.164150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.517 [2024-07-15 21:45:29.164161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.517 qpair failed and we were unable to recover it. 00:29:39.517 [2024-07-15 21:45:29.174089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.517 [2024-07-15 21:45:29.174153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.517 [2024-07-15 21:45:29.174165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.517 [2024-07-15 21:45:29.174170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.517 [2024-07-15 21:45:29.174174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.517 [2024-07-15 21:45:29.174185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.517 qpair failed and we were unable to recover it. 
00:29:39.517 [2024-07-15 21:45:29.184119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.517 [2024-07-15 21:45:29.184182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.517 [2024-07-15 21:45:29.184194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.517 [2024-07-15 21:45:29.184199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.518 [2024-07-15 21:45:29.184203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.518 [2024-07-15 21:45:29.184214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.518 qpair failed and we were unable to recover it. 00:29:39.518 [2024-07-15 21:45:29.194148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.518 [2024-07-15 21:45:29.194210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.518 [2024-07-15 21:45:29.194222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.518 [2024-07-15 21:45:29.194227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.518 [2024-07-15 21:45:29.194231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.518 [2024-07-15 21:45:29.194242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.518 qpair failed and we were unable to recover it. 00:29:39.518 [2024-07-15 21:45:29.204193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.518 [2024-07-15 21:45:29.204259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.518 [2024-07-15 21:45:29.204271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.518 [2024-07-15 21:45:29.204276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.518 [2024-07-15 21:45:29.204283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.518 [2024-07-15 21:45:29.204294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.518 qpair failed and we were unable to recover it. 
00:29:39.518 [2024-07-15 21:45:29.214229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.518 [2024-07-15 21:45:29.214308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.518 [2024-07-15 21:45:29.214320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.518 [2024-07-15 21:45:29.214325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.518 [2024-07-15 21:45:29.214329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.518 [2024-07-15 21:45:29.214340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.518 qpair failed and we were unable to recover it. 00:29:39.518 [2024-07-15 21:45:29.224217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.518 [2024-07-15 21:45:29.224383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.518 [2024-07-15 21:45:29.224395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.518 [2024-07-15 21:45:29.224400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.518 [2024-07-15 21:45:29.224404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.518 [2024-07-15 21:45:29.224415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.518 qpair failed and we were unable to recover it. 00:29:39.518 [2024-07-15 21:45:29.234266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.518 [2024-07-15 21:45:29.234364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.518 [2024-07-15 21:45:29.234375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.518 [2024-07-15 21:45:29.234380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.518 [2024-07-15 21:45:29.234384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.518 [2024-07-15 21:45:29.234395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.518 qpair failed and we were unable to recover it. 
00:29:39.518 [2024-07-15 21:45:29.244284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.518 [2024-07-15 21:45:29.244356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.518 [2024-07-15 21:45:29.244368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.518 [2024-07-15 21:45:29.244373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.518 [2024-07-15 21:45:29.244377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.518 [2024-07-15 21:45:29.244388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.518 qpair failed and we were unable to recover it. 00:29:39.518 [2024-07-15 21:45:29.254337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.518 [2024-07-15 21:45:29.254403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.518 [2024-07-15 21:45:29.254415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.518 [2024-07-15 21:45:29.254419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.518 [2024-07-15 21:45:29.254424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.518 [2024-07-15 21:45:29.254434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.518 qpair failed and we were unable to recover it. 00:29:39.518 [2024-07-15 21:45:29.264392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.518 [2024-07-15 21:45:29.264450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.518 [2024-07-15 21:45:29.264463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.518 [2024-07-15 21:45:29.264467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.518 [2024-07-15 21:45:29.264471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.518 [2024-07-15 21:45:29.264483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.518 qpair failed and we were unable to recover it. 
00:29:39.518 [2024-07-15 21:45:29.274366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.518 [2024-07-15 21:45:29.274434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.518 [2024-07-15 21:45:29.274446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.518 [2024-07-15 21:45:29.274451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.518 [2024-07-15 21:45:29.274455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.518 [2024-07-15 21:45:29.274465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.518 qpair failed and we were unable to recover it. 00:29:39.518 [2024-07-15 21:45:29.284416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.518 [2024-07-15 21:45:29.284483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.518 [2024-07-15 21:45:29.284495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.518 [2024-07-15 21:45:29.284499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.518 [2024-07-15 21:45:29.284504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.518 [2024-07-15 21:45:29.284514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.518 qpair failed and we were unable to recover it. 00:29:39.518 [2024-07-15 21:45:29.294408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.518 [2024-07-15 21:45:29.294468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.518 [2024-07-15 21:45:29.294481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.518 [2024-07-15 21:45:29.294486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.518 [2024-07-15 21:45:29.294493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.518 [2024-07-15 21:45:29.294504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.518 qpair failed and we were unable to recover it. 
00:29:39.518 [2024-07-15 21:45:29.304465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.518 [2024-07-15 21:45:29.304574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.518 [2024-07-15 21:45:29.304587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.518 [2024-07-15 21:45:29.304592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.518 [2024-07-15 21:45:29.304597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.518 [2024-07-15 21:45:29.304608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.518 qpair failed and we were unable to recover it. 00:29:39.518 [2024-07-15 21:45:29.314464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.518 [2024-07-15 21:45:29.314525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.518 [2024-07-15 21:45:29.314537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.518 [2024-07-15 21:45:29.314542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.518 [2024-07-15 21:45:29.314546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.518 [2024-07-15 21:45:29.314557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.518 qpair failed and we were unable to recover it. 00:29:39.780 [2024-07-15 21:45:29.324489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.780 [2024-07-15 21:45:29.324559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.780 [2024-07-15 21:45:29.324571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.780 [2024-07-15 21:45:29.324576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.780 [2024-07-15 21:45:29.324580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.780 [2024-07-15 21:45:29.324591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.780 qpair failed and we were unable to recover it. 
00:29:39.780 [2024-07-15 21:45:29.334570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.780 [2024-07-15 21:45:29.334631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.780 [2024-07-15 21:45:29.334643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.780 [2024-07-15 21:45:29.334648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.780 [2024-07-15 21:45:29.334652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.780 [2024-07-15 21:45:29.334663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.780 qpair failed and we were unable to recover it. 00:29:39.780 [2024-07-15 21:45:29.344586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.780 [2024-07-15 21:45:29.344649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.780 [2024-07-15 21:45:29.344662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.780 [2024-07-15 21:45:29.344667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.780 [2024-07-15 21:45:29.344671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.780 [2024-07-15 21:45:29.344682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.780 qpair failed and we were unable to recover it. 00:29:39.780 [2024-07-15 21:45:29.354688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.780 [2024-07-15 21:45:29.354751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.780 [2024-07-15 21:45:29.354763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.780 [2024-07-15 21:45:29.354768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.780 [2024-07-15 21:45:29.354772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.780 [2024-07-15 21:45:29.354783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.780 qpair failed and we were unable to recover it. 
00:29:39.780 [2024-07-15 21:45:29.364622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.780 [2024-07-15 21:45:29.364690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.780 [2024-07-15 21:45:29.364702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.780 [2024-07-15 21:45:29.364707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.780 [2024-07-15 21:45:29.364711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.780 [2024-07-15 21:45:29.364722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.780 qpair failed and we were unable to recover it. 00:29:39.780 [2024-07-15 21:45:29.374651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.780 [2024-07-15 21:45:29.374714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.780 [2024-07-15 21:45:29.374726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.780 [2024-07-15 21:45:29.374731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.780 [2024-07-15 21:45:29.374735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.780 [2024-07-15 21:45:29.374745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.780 qpair failed and we were unable to recover it. 00:29:39.780 [2024-07-15 21:45:29.384656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.780 [2024-07-15 21:45:29.384716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.780 [2024-07-15 21:45:29.384728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.780 [2024-07-15 21:45:29.384736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.781 [2024-07-15 21:45:29.384740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f299c000b90 00:29:39.781 [2024-07-15 21:45:29.384751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:39.781 qpair failed and we were unable to recover it. 
00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 [2024-07-15 21:45:29.385606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.781 [2024-07-15 21:45:29.394720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.781 [2024-07-15 21:45:29.394916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.781 [2024-07-15 21:45:29.394975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.781 [2024-07-15 21:45:29.394998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:29:39.781 [2024-07-15 21:45:29.395017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2994000b90 00:29:39.781 [2024-07-15 21:45:29.395063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.781 qpair failed and we were unable to recover it. 00:29:39.781 [2024-07-15 21:45:29.404726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.781 [2024-07-15 21:45:29.404873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.781 [2024-07-15 21:45:29.404906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.781 [2024-07-15 21:45:29.404921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.781 [2024-07-15 21:45:29.404941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2994000b90 00:29:39.781 [2024-07-15 21:45:29.404973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.781 qpair failed and we were unable to recover it. 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 
starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Read completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 Write completed with error (sct=0, sc=8) 00:29:39.781 starting I/O failed 00:29:39.781 [2024-07-15 21:45:29.405836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:39.781 [2024-07-15 21:45:29.414848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.781 [2024-07-15 21:45:29.415051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.781 [2024-07-15 21:45:29.415103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.781 [2024-07-15 21:45:29.415137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.781 [2024-07-15 21:45:29.415159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f29a4000b90 00:29:39.781 [2024-07-15 21:45:29.415206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:39.781 qpair failed and we were unable to recover it. 00:29:39.781 [2024-07-15 21:45:29.424858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.781 [2024-07-15 21:45:29.424992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.781 [2024-07-15 21:45:29.425022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.781 [2024-07-15 21:45:29.425036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.781 [2024-07-15 21:45:29.425048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f29a4000b90 00:29:39.781 [2024-07-15 21:45:29.425083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:39.781 qpair failed and we were unable to recover it. 
00:29:39.781 [2024-07-15 21:45:29.425485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x654a30 is same with the state(5) to be set 00:29:39.781 [2024-07-15 21:45:29.434839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.781 [2024-07-15 21:45:29.434927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.781 [2024-07-15 21:45:29.434953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.781 [2024-07-15 21:45:29.434962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.781 [2024-07-15 21:45:29.434969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6469e0 00:29:39.781 [2024-07-15 21:45:29.434988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.781 qpair failed and we were unable to recover it. 00:29:39.781 [2024-07-15 21:45:29.444835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.781 [2024-07-15 21:45:29.444922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.781 [2024-07-15 21:45:29.444946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.781 [2024-07-15 21:45:29.444955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.781 [2024-07-15 21:45:29.444962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6469e0 00:29:39.782 [2024-07-15 21:45:29.444981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.782 qpair failed and we were unable to recover it. 00:29:39.782 [2024-07-15 21:45:29.445391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x654a30 (9): Bad file descriptor 00:29:39.782 Initializing NVMe Controllers 00:29:39.782 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:39.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:39.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:39.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:39.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:39.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:39.782 Initialization complete. Launching workers. 
00:29:39.782 Starting thread on core 1 00:29:39.782 Starting thread on core 2 00:29:39.782 Starting thread on core 3 00:29:39.782 Starting thread on core 0 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:39.782 00:29:39.782 real 0m11.254s 00:29:39.782 user 0m20.862s 00:29:39.782 sys 0m3.903s 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:39.782 ************************************ 00:29:39.782 END TEST nvmf_target_disconnect_tc2 00:29:39.782 ************************************ 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:39.782 rmmod nvme_tcp 00:29:39.782 rmmod nvme_fabrics 00:29:39.782 rmmod nvme_keyring 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2368522 ']' 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2368522 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2368522 ']' 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2368522 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:39.782 21:45:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2368522 00:29:40.043 21:45:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:29:40.043 21:45:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:29:40.043 21:45:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2368522' 00:29:40.043 killing process with pid 2368522 00:29:40.043 21:45:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2368522 00:29:40.043 21:45:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2368522 00:29:40.043 
21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:40.043 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:40.043 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:40.043 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:40.043 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:40.043 21:45:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.043 21:45:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:40.043 21:45:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.591 21:45:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:42.591 00:29:42.591 real 0m21.199s 00:29:42.591 user 0m48.211s 00:29:42.591 sys 0m9.670s 00:29:42.591 21:45:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:42.591 21:45:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:42.591 ************************************ 00:29:42.591 END TEST nvmf_target_disconnect 00:29:42.591 ************************************ 00:29:42.591 21:45:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:42.591 21:45:31 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:29:42.591 21:45:31 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:42.591 21:45:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:42.591 21:45:31 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:29:42.591 00:29:42.591 real 22m40.753s 00:29:42.591 user 47m36.666s 00:29:42.591 sys 7m6.208s 00:29:42.591 21:45:31 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:42.591 21:45:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:42.591 ************************************ 00:29:42.591 END TEST nvmf_tcp 00:29:42.591 ************************************ 00:29:42.591 21:45:31 -- common/autotest_common.sh@1142 -- # return 0 00:29:42.591 21:45:31 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:42.591 21:45:31 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:42.591 21:45:31 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:42.591 21:45:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:42.591 21:45:31 -- common/autotest_common.sh@10 -- # set +x 00:29:42.591 ************************************ 00:29:42.591 START TEST spdkcli_nvmf_tcp 00:29:42.591 ************************************ 00:29:42.591 21:45:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:42.591 * Looking for test storage... 
00:29:42.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2370353 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2370353 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2370353 ']' 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:42.591 21:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:42.591 [2024-07-15 21:45:32.191155] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:29:42.591 [2024-07-15 21:45:32.191226] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370353 ] 00:29:42.591 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.591 [2024-07-15 21:45:32.254791] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:42.591 [2024-07-15 21:45:32.329694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.591 [2024-07-15 21:45:32.329698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.162 21:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:43.163 21:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:29:43.163 21:45:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:43.163 21:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:43.163 21:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:43.424 21:45:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:43.424 21:45:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:43.424 21:45:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:43.424 21:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:43.424 21:45:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:43.424 21:45:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:43.424 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:43.424 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:43.424 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:43.424 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:43.424 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:43.424 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:43.424 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:43.424 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:43.424 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:43.424 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:43.424 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:43.424 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:43.424 ' 00:29:45.966 [2024-07-15 21:45:35.367365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.906 [2024-07-15 21:45:36.667489] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:49.450 [2024-07-15 21:45:39.086626] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:51.992 [2024-07-15 21:45:41.168982] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:53.373 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:53.373 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:53.373 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:53.373 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:53.373 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:53.373 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:53.373 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:53.373 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:53.373 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:53.373 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:53.373 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:53.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:53.373 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:53.373 21:45:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:53.373 21:45:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:53.373 21:45:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:53.373 21:45:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:53.373 21:45:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:53.373 21:45:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:53.373 21:45:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:53.373 21:45:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:53.633 21:45:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:53.633 21:45:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:53.633 21:45:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:53.633 21:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:53.633 21:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:53.633 21:45:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:53.633 21:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:53.633 21:45:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:53.633 21:45:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:53.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:53.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:53.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:53.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:53.633 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:53.633 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:53.633 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:53.633 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:53.633 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:53.633 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:53.633 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:53.633 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:53.633 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:53.633 ' 00:29:58.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:58.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:58.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:58.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:58.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:58.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:58.918 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:58.918 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:58.918 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:58.918 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:58.918 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:29:58.918 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:58.918 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:58.918 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2370353 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2370353 ']' 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2370353 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2370353 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2370353' 00:29:58.918 killing process with pid 2370353 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2370353 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2370353 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2370353 ']' 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2370353 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2370353 ']' 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2370353 00:29:58.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2370353) - No such process 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2370353 is not found' 00:29:58.918 Process with pid 2370353 is not found 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:58.918 00:29:58.918 real 0m16.435s 00:29:58.918 user 0m35.001s 00:29:58.918 sys 0m0.866s 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:58.918 21:45:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:58.918 ************************************ 00:29:58.918 END TEST spdkcli_nvmf_tcp 00:29:58.918 ************************************ 00:29:58.918 21:45:48 -- common/autotest_common.sh@1142 -- # return 0 00:29:58.918 21:45:48 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:58.918 21:45:48 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:58.918 21:45:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:58.918 21:45:48 -- common/autotest_common.sh@10 -- # set +x 00:29:58.918 ************************************ 00:29:58.918 START TEST nvmf_identify_passthru 00:29:58.918 ************************************ 00:29:58.918 21:45:48 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:58.918 * Looking for test storage... 00:29:58.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:58.918 21:45:48 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.918 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:58.918 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.918 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.918 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.918 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.918 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.918 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.918 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.918 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.918 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.918 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.919 21:45:48 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.919 21:45:48 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.919 21:45:48 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.919 21:45:48 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.919 21:45:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.919 21:45:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.919 21:45:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:58.919 21:45:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:58.919 21:45:48 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.919 21:45:48 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.919 21:45:48 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.919 21:45:48 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.919 21:45:48 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.919 21:45:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.919 21:45:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.919 21:45:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:58.919 21:45:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.919 21:45:48 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.919 21:45:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:58.919 21:45:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:58.919 21:45:48 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:58.919 21:45:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.064 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:07.064 21:45:55 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:07.064 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:07.064 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:07.064 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:07.064 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:07.064 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:07.064 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:07.064 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:07.065 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:07.065 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:07.065 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:07.065 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
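The block above is nvmf/common.sh selecting which NICs the TCP tests may use: it matches the Intel E810 device ID 0x159b (driver ice), resolves each PCI function to its kernel netdev via sysfs, reports cvl_0_0 under 0000:4b:00.0 and cvl_0_1 under 0000:4b:00.1, and sets is_hw=yes. A minimal stand-alone sketch of that sysfs lookup, for illustration only (it is not the harness's gather_supported_nvmf_pci_devs):

    # Map every Intel E810 (8086:159b) PCI function to its netdev name(s),
    # the same association the log prints as "Found net devices under ...".
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for netdev in "$pci"/net/*; do
            [[ -e $netdev ]] && echo "Found net devices under ${pci##*/}: ${netdev##*/}"
        done
    done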
00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:07.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:07.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:30:07.065 00:30:07.065 --- 10.0.0.2 ping statistics --- 00:30:07.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.065 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:07.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:07.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:30:07.065 00:30:07.065 --- 10.0.0.1 ping statistics --- 00:30:07.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.065 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:07.065 21:45:55 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:07.065 21:45:55 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:07.065 21:45:55 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:07.065 21:45:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.065 21:45:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:07.065 21:45:55 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:07.065 21:45:55 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:07.065 21:45:55 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:07.065 21:45:55 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:07.065 21:45:55 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:07.065 21:45:55 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:07.065 21:45:55 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:07.065 21:45:55 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:07.065 21:45:55 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:07.065 21:45:55 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:07.065 21:45:55 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:30:07.065 21:45:55 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:30:07.065 21:45:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:07.065 21:45:55 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:07.065 21:45:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:07.065 21:45:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:07.065 21:45:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:07.065 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.065 
21:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:30:07.065 21:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:07.065 21:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:07.065 21:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:07.065 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.327 21:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:07.327 21:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:07.327 21:45:56 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:07.327 21:45:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.327 21:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:07.327 21:45:56 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:07.327 21:45:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.327 21:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2377420 00:30:07.327 21:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:07.327 21:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:07.327 21:45:56 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2377420 00:30:07.327 21:45:56 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2377420 ']' 00:30:07.327 21:45:56 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.327 21:45:56 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:07.327 21:45:56 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:07.327 21:45:56 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:07.327 21:45:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.327 [2024-07-15 21:45:57.009479] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:30:07.327 [2024-07-15 21:45:57.009576] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.327 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.327 [2024-07-15 21:45:57.082740] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:07.611 [2024-07-15 21:45:57.154634] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.611 [2024-07-15 21:45:57.154672] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:07.611 [2024-07-15 21:45:57.154680] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.611 [2024-07-15 21:45:57.154686] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.611 [2024-07-15 21:45:57.154692] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:07.611 [2024-07-15 21:45:57.154827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.611 [2024-07-15 21:45:57.154951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:07.611 [2024-07-15 21:45:57.155108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.611 [2024-07-15 21:45:57.155109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:08.236 21:45:57 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:08.236 21:45:57 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:08.236 21:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:08.236 21:45:57 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.236 21:45:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:08.236 INFO: Log level set to 20 00:30:08.236 INFO: Requests: 00:30:08.236 { 00:30:08.236 "jsonrpc": "2.0", 00:30:08.236 "method": "nvmf_set_config", 00:30:08.236 "id": 1, 00:30:08.236 "params": { 00:30:08.236 "admin_cmd_passthru": { 00:30:08.236 "identify_ctrlr": true 00:30:08.236 } 00:30:08.236 } 00:30:08.236 } 00:30:08.236 00:30:08.236 INFO: response: 00:30:08.236 { 00:30:08.236 "jsonrpc": "2.0", 00:30:08.236 "id": 1, 00:30:08.236 "result": true 00:30:08.236 } 00:30:08.236 00:30:08.236 21:45:57 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.236 21:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:08.236 21:45:57 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.236 21:45:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:08.236 INFO: Setting log level to 20 00:30:08.236 INFO: Setting log level to 20 00:30:08.236 INFO: Log level set to 20 00:30:08.236 INFO: Log level set to 20 00:30:08.236 INFO: Requests: 00:30:08.236 { 00:30:08.236 "jsonrpc": "2.0", 00:30:08.236 "method": "framework_start_init", 00:30:08.236 "id": 1 00:30:08.236 } 00:30:08.236 00:30:08.236 INFO: Requests: 00:30:08.236 { 00:30:08.236 "jsonrpc": "2.0", 00:30:08.236 "method": "framework_start_init", 00:30:08.236 "id": 1 00:30:08.236 } 00:30:08.236 00:30:08.236 [2024-07-15 21:45:57.863546] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:08.236 INFO: response: 00:30:08.236 { 00:30:08.236 "jsonrpc": "2.0", 00:30:08.236 "id": 1, 00:30:08.236 "result": true 00:30:08.236 } 00:30:08.236 00:30:08.236 INFO: response: 00:30:08.236 { 00:30:08.236 "jsonrpc": "2.0", 00:30:08.236 "id": 1, 00:30:08.236 "result": true 00:30:08.236 } 00:30:08.236 00:30:08.236 21:45:57 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.236 21:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:08.236 21:45:57 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.236 21:45:57 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:08.236 INFO: Setting log level to 40 00:30:08.236 INFO: Setting log level to 40 00:30:08.236 INFO: Setting log level to 40 00:30:08.236 [2024-07-15 21:45:57.876857] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.236 21:45:57 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.236 21:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:08.236 21:45:57 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:08.236 21:45:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:08.236 21:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:08.236 21:45:57 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.236 21:45:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:08.497 Nvme0n1 00:30:08.497 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.497 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:08.497 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.497 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:08.497 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.497 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:08.497 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.497 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:08.497 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.497 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:08.497 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.497 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:08.497 [2024-07-15 21:45:58.261418] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.497 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.497 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:08.497 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.497 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:08.497 [ 00:30:08.497 { 00:30:08.497 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:08.497 "subtype": "Discovery", 00:30:08.497 "listen_addresses": [], 00:30:08.497 "allow_any_host": true, 00:30:08.497 "hosts": [] 00:30:08.497 }, 00:30:08.497 { 00:30:08.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:08.497 "subtype": "NVMe", 00:30:08.497 "listen_addresses": [ 00:30:08.497 { 00:30:08.497 "trtype": "TCP", 00:30:08.497 "adrfam": "IPv4", 00:30:08.497 "traddr": "10.0.0.2", 00:30:08.497 "trsvcid": "4420" 00:30:08.497 } 00:30:08.497 ], 00:30:08.497 "allow_any_host": true, 00:30:08.497 "hosts": [], 00:30:08.497 "serial_number": 
"SPDK00000000000001", 00:30:08.497 "model_number": "SPDK bdev Controller", 00:30:08.497 "max_namespaces": 1, 00:30:08.497 "min_cntlid": 1, 00:30:08.497 "max_cntlid": 65519, 00:30:08.497 "namespaces": [ 00:30:08.497 { 00:30:08.497 "nsid": 1, 00:30:08.497 "bdev_name": "Nvme0n1", 00:30:08.497 "name": "Nvme0n1", 00:30:08.497 "nguid": "36344730526054870025384500000044", 00:30:08.497 "uuid": "36344730-5260-5487-0025-384500000044" 00:30:08.497 } 00:30:08.497 ] 00:30:08.497 } 00:30:08.497 ] 00:30:08.497 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.497 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:08.497 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:08.497 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:08.759 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.759 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:30:08.759 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:08.759 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:08.759 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:09.021 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.021 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:09.021 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:30:09.021 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:09.021 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:09.021 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.021 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:09.021 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.021 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:09.021 21:45:58 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:09.021 21:45:58 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:09.021 21:45:58 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:09.021 21:45:58 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:09.021 21:45:58 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:09.021 21:45:58 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:09.021 21:45:58 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:09.021 rmmod nvme_tcp 00:30:09.021 rmmod nvme_fabrics 00:30:09.021 rmmod nvme_keyring 00:30:09.021 21:45:58 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:09.021 21:45:58 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:09.021 21:45:58 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:09.021 21:45:58 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2377420 ']' 00:30:09.021 21:45:58 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2377420 00:30:09.021 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2377420 ']' 00:30:09.021 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2377420 00:30:09.021 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:30:09.021 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:09.021 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2377420 00:30:09.021 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:09.021 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:09.021 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2377420' 00:30:09.021 killing process with pid 2377420 00:30:09.021 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2377420 00:30:09.021 21:45:58 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2377420 00:30:09.281 21:45:59 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:09.281 21:45:59 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:09.281 21:45:59 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:09.281 21:45:59 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:09.281 21:45:59 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:09.281 21:45:59 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.281 21:45:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:09.282 21:45:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.827 21:46:01 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:11.827 00:30:11.827 real 0m12.646s 00:30:11.827 user 0m10.034s 00:30:11.827 sys 0m6.093s 00:30:11.827 21:46:01 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:11.827 21:46:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:11.827 ************************************ 00:30:11.827 END TEST nvmf_identify_passthru 00:30:11.827 ************************************ 00:30:11.827 21:46:01 -- common/autotest_common.sh@1142 -- # return 0 00:30:11.827 21:46:01 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:11.827 21:46:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:11.827 21:46:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:11.827 21:46:01 -- common/autotest_common.sh@10 -- # set +x 00:30:11.827 ************************************ 00:30:11.827 START TEST nvmf_dif 00:30:11.827 ************************************ 00:30:11.827 21:46:01 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:11.827 * Looking for test storage... 
00:30:11.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:11.827 21:46:01 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.827 21:46:01 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:11.827 21:46:01 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.827 21:46:01 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.827 21:46:01 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.827 21:46:01 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.827 21:46:01 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.827 21:46:01 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.827 21:46:01 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.827 21:46:01 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.827 21:46:01 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.827 21:46:01 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.827 21:46:01 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:11.827 21:46:01 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:11.827 21:46:01 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.828 21:46:01 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.828 21:46:01 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.828 21:46:01 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.828 21:46:01 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.828 21:46:01 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.828 21:46:01 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.828 21:46:01 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:11.828 21:46:01 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:11.828 21:46:01 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:11.828 21:46:01 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:11.828 21:46:01 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:11.828 21:46:01 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:11.828 21:46:01 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.828 21:46:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:11.828 21:46:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:11.828 21:46:01 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:11.828 21:46:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:19.969 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:19.969 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:19.969 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:19.969 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:19.969 21:46:08 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:19.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:19.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:30:19.969 00:30:19.969 --- 10.0.0.2 ping statistics --- 00:30:19.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.969 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:19.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:19.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:30:19.969 00:30:19.969 --- 10.0.0.1 ping statistics --- 00:30:19.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.969 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:19.969 21:46:08 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:21.887 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:21.887 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:21.888 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:21.888 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:21.888 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:21.888 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:21.888 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:21.888 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:21.888 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:21.888 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:21.888 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:21.888 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:21.888 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:21.888 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:21.888 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:21.888 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:21.888 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:22.458 21:46:11 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:22.458 21:46:11 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:22.458 21:46:11 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:22.458 21:46:11 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:22.458 21:46:11 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:22.458 21:46:11 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:22.458 21:46:12 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:22.458 21:46:12 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:22.458 21:46:12 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:22.458 21:46:12 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:22.458 21:46:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:22.458 21:46:12 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2383271 00:30:22.458 21:46:12 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2383271 00:30:22.458 21:46:12 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:22.458 21:46:12 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2383271 ']' 00:30:22.458 21:46:12 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.458 21:46:12 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:22.458 21:46:12 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.458 21:46:12 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:22.458 21:46:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:22.458 [2024-07-15 21:46:12.085719] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:30:22.458 [2024-07-15 21:46:12.085781] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.458 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.458 [2024-07-15 21:46:12.155014] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.458 [2024-07-15 21:46:12.228204] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.458 [2024-07-15 21:46:12.228240] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.458 [2024-07-15 21:46:12.228247] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.458 [2024-07-15 21:46:12.228254] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.458 [2024-07-15 21:46:12.228259] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
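Above, the dif test's nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace (-i 0 -e 0xFFFF) and waitforlisten blocks until the target's JSON-RPC socket answers before any rpc_cmd is issued. A rough shell sketch of that start-then-poll pattern, using paths and flags taken from the trace; the polling loop and error handling are illustrative, not the common.sh implementation:

    # Start the target in the namespace set up earlier, then poll the default
    # RPC socket; rpc_get_methods is a cheap query that succeeds once the app listens.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is ready on /var/tmp/spdk.sock"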
00:30:22.458 [2024-07-15 21:46:12.228278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.398 21:46:12 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:23.398 21:46:12 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:23.398 21:46:12 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:23.398 21:46:12 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:23.398 21:46:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:23.398 21:46:12 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.398 21:46:12 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:23.398 21:46:12 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:23.398 21:46:12 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.398 21:46:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:23.398 [2024-07-15 21:46:12.891305] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.398 21:46:12 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.398 21:46:12 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:23.398 21:46:12 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:23.398 21:46:12 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:23.398 21:46:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:23.398 ************************************ 00:30:23.398 START TEST fio_dif_1_default 00:30:23.398 ************************************ 00:30:23.398 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:23.398 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:23.398 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:23.398 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:23.398 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:23.399 bdev_null0 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:23.399 [2024-07-15 21:46:12.979645] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:23.399 { 00:30:23.399 "params": { 00:30:23.399 "name": "Nvme$subsystem", 00:30:23.399 "trtype": "$TEST_TRANSPORT", 00:30:23.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.399 "adrfam": "ipv4", 00:30:23.399 "trsvcid": "$NVMF_PORT", 00:30:23.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.399 "hdgst": ${hdgst:-false}, 00:30:23.399 "ddgst": ${ddgst:-false} 00:30:23.399 }, 00:30:23.399 "method": "bdev_nvme_attach_controller" 00:30:23.399 } 00:30:23.399 EOF 00:30:23.399 )") 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:23.399 21:46:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:23.399 "params": { 00:30:23.399 "name": "Nvme0", 00:30:23.399 "trtype": "tcp", 00:30:23.399 "traddr": "10.0.0.2", 00:30:23.399 "adrfam": "ipv4", 00:30:23.399 "trsvcid": "4420", 00:30:23.399 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:23.399 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:23.399 "hdgst": false, 00:30:23.399 "ddgst": false 00:30:23.399 }, 00:30:23.399 "method": "bdev_nvme_attach_controller" 00:30:23.399 }' 00:30:23.399 21:46:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:23.399 21:46:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:23.399 21:46:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:23.399 21:46:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:23.399 21:46:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:23.399 21:46:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:23.399 21:46:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:23.399 21:46:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:23.399 21:46:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:23.399 21:46:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:23.662 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:23.662 fio-3.35 00:30:23.662 Starting 1 thread 00:30:23.662 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.888 00:30:35.888 filename0: (groupid=0, jobs=1): err= 0: pid=2383804: Mon Jul 15 21:46:24 2024 00:30:35.888 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10019msec) 00:30:35.888 slat (nsec): min=5406, max=42386, avg=6221.38, stdev=1510.31 00:30:35.888 clat (usec): min=1122, max=43540, avg=21574.90, stdev=20064.74 00:30:35.888 lat (usec): min=1130, max=43582, avg=21581.12, stdev=20064.74 00:30:35.888 clat percentiles (usec): 00:30:35.888 | 1.00th=[ 1303], 5.00th=[ 1385], 10.00th=[ 1401], 20.00th=[ 1434], 00:30:35.888 | 30.00th=[ 1450], 40.00th=[ 1467], 50.00th=[41157], 60.00th=[41681], 00:30:35.888 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:35.888 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:30:35.888 | 99.99th=[43779] 00:30:35.888 bw ( KiB/s): min= 672, max= 768, per=99.87%, avg=740.80, stdev=33.28, samples=20 00:30:35.888 iops : min= 168, max= 
192, avg=185.20, stdev= 8.32, samples=20 00:30:35.888 lat (msec) : 2=49.78%, 50=50.22% 00:30:35.888 cpu : usr=95.18%, sys=4.63%, ctx=14, majf=0, minf=239 00:30:35.888 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:35.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.888 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.888 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:35.888 00:30:35.888 Run status group 0 (all jobs): 00:30:35.889 READ: bw=741KiB/s (759kB/s), 741KiB/s-741KiB/s (759kB/s-759kB/s), io=7424KiB (7602kB), run=10019-10019msec 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.889 00:30:35.889 real 0m11.245s 00:30:35.889 user 0m25.421s 00:30:35.889 sys 0m0.834s 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:35.889 ************************************ 00:30:35.889 END TEST fio_dif_1_default 00:30:35.889 ************************************ 00:30:35.889 21:46:24 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:35.889 21:46:24 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:35.889 21:46:24 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:35.889 21:46:24 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:35.889 21:46:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:35.889 ************************************ 00:30:35.889 START TEST fio_dif_1_multi_subsystems 00:30:35.889 ************************************ 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub 
in "$@" 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:35.889 bdev_null0 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:35.889 [2024-07-15 21:46:24.303422] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:35.889 bdev_null1 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:35.889 21:46:24 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.889 { 00:30:35.889 "params": { 00:30:35.889 "name": "Nvme$subsystem", 00:30:35.889 "trtype": "$TEST_TRANSPORT", 00:30:35.889 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.889 "adrfam": "ipv4", 00:30:35.889 "trsvcid": "$NVMF_PORT", 00:30:35.889 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.889 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.889 "hdgst": ${hdgst:-false}, 00:30:35.889 "ddgst": ${ddgst:-false} 00:30:35.889 }, 00:30:35.889 "method": "bdev_nvme_attach_controller" 00:30:35.889 } 00:30:35.889 EOF 00:30:35.889 )") 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:35.889 { 00:30:35.889 "params": { 00:30:35.889 "name": "Nvme$subsystem", 00:30:35.889 "trtype": "$TEST_TRANSPORT", 00:30:35.889 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.889 "adrfam": "ipv4", 00:30:35.889 "trsvcid": "$NVMF_PORT", 00:30:35.889 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.889 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.889 "hdgst": ${hdgst:-false}, 00:30:35.889 "ddgst": ${ddgst:-false} 00:30:35.889 }, 00:30:35.889 "method": "bdev_nvme_attach_controller" 00:30:35.889 } 00:30:35.889 EOF 00:30:35.889 )") 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
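gen_nvmf_target_json, traced above, builds one bdev_nvme_attach_controller parameter block per subsystem id and joins them with IFS=','; the helper then embeds the joined blocks in a full SPDK JSON config and pretty-prints it with jq (the jq . entry above) before handing it to fio, a wrapper the trace does not echo in full. A simplified sketch of the loop with the transport values hard-coded to the ones used in this run (the helper itself reads them from $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT, as the heredocs above show):

  config=()
  for subsystem in 0 1; do
    config+=("$(cat <<EOF
  { "params": { "name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
                "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
                "hdgst": false, "ddgst": false },
    "method": "bdev_nvme_attach_controller" }
EOF
    )")
  done
  (IFS=,; printf '%s\n' "${config[*]}")   # the comma-joined pair shown in the printf trace just below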
00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:35.889 "params": { 00:30:35.889 "name": "Nvme0", 00:30:35.889 "trtype": "tcp", 00:30:35.889 "traddr": "10.0.0.2", 00:30:35.889 "adrfam": "ipv4", 00:30:35.889 "trsvcid": "4420", 00:30:35.889 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:35.889 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:35.889 "hdgst": false, 00:30:35.889 "ddgst": false 00:30:35.889 }, 00:30:35.889 "method": "bdev_nvme_attach_controller" 00:30:35.889 },{ 00:30:35.889 "params": { 00:30:35.889 "name": "Nvme1", 00:30:35.889 "trtype": "tcp", 00:30:35.889 "traddr": "10.0.0.2", 00:30:35.889 "adrfam": "ipv4", 00:30:35.889 "trsvcid": "4420", 00:30:35.889 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:35.889 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:35.889 "hdgst": false, 00:30:35.889 "ddgst": false 00:30:35.889 }, 00:30:35.889 "method": "bdev_nvme_attach_controller" 00:30:35.889 }' 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:35.889 21:46:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:35.889 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:35.889 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:35.889 fio-3.35 00:30:35.889 Starting 2 threads 00:30:35.889 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.950 00:30:45.950 filename0: (groupid=0, jobs=1): err= 0: pid=2386317: Mon Jul 15 21:46:35 2024 00:30:45.950 read: IOPS=185, BW=742KiB/s (760kB/s)(7440KiB/10027msec) 00:30:45.950 slat (nsec): min=5409, max=52383, avg=6292.34, stdev=1972.37 00:30:45.950 clat (usec): min=1273, max=43540, avg=21545.23, stdev=20087.93 00:30:45.950 lat (usec): min=1279, max=43576, avg=21551.52, stdev=20087.91 00:30:45.950 clat percentiles (usec): 00:30:45.950 | 1.00th=[ 1303], 5.00th=[ 1369], 10.00th=[ 1385], 20.00th=[ 1401], 00:30:45.950 | 30.00th=[ 1434], 40.00th=[ 1450], 50.00th=[41681], 60.00th=[41681], 00:30:45.950 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:45.950 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:30:45.950 | 99.99th=[43779] 
00:30:45.950 bw ( KiB/s): min= 704, max= 768, per=66.15%, avg=742.40, stdev=32.17, samples=20 00:30:45.950 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:30:45.950 lat (msec) : 2=49.89%, 50=50.11% 00:30:45.950 cpu : usr=96.85%, sys=2.95%, ctx=15, majf=0, minf=190 00:30:45.950 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:45.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.950 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:45.950 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:45.950 filename1: (groupid=0, jobs=1): err= 0: pid=2386318: Mon Jul 15 21:46:35 2024 00:30:45.950 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10042msec) 00:30:45.950 slat (nsec): min=5401, max=51265, avg=5822.06, stdev=1889.03 00:30:45.950 clat (usec): min=41714, max=43462, avg=41998.58, stdev=140.39 00:30:45.950 lat (usec): min=41720, max=43514, avg=42004.40, stdev=141.06 00:30:45.950 clat percentiles (usec): 00:30:45.950 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:45.950 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:45.950 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:45.950 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:30:45.950 | 99.99th=[43254] 00:30:45.950 bw ( KiB/s): min= 352, max= 384, per=33.88%, avg=380.80, stdev= 9.85, samples=20 00:30:45.950 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:30:45.950 lat (msec) : 50=100.00% 00:30:45.950 cpu : usr=96.78%, sys=3.03%, ctx=9, majf=0, minf=151 00:30:45.950 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:45.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.950 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:45.950 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:45.950 00:30:45.950 Run status group 0 (all jobs): 00:30:45.950 READ: bw=1122KiB/s (1149kB/s), 381KiB/s-742KiB/s (390kB/s-760kB/s), io=11.0MiB (11.5MB), run=10027-10042msec 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.210 21:46:35 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.210 00:30:46.210 real 0m11.658s 00:30:46.210 user 0m36.376s 00:30:46.210 sys 0m0.979s 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:46.210 21:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:46.210 ************************************ 00:30:46.210 END TEST fio_dif_1_multi_subsystems 00:30:46.210 ************************************ 00:30:46.210 21:46:35 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:46.210 21:46:35 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:46.210 21:46:35 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:46.210 21:46:35 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:46.210 21:46:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:46.210 ************************************ 00:30:46.210 START TEST fio_dif_rand_params 00:30:46.210 ************************************ 00:30:46.210 21:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:30:46.210 21:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:46.210 21:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:46.210 21:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:46.210 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:46.210 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:46.210 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:46.210 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:46.210 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:46.210 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:46.210 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:46.210 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
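For fio_dif_rand_params the first pass uses NULL_DIF=3, so each create_subsystem call here reduces to the same four RPCs traced for the earlier tests, just with DIF type 3 null bdevs; rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. For subsystem 0, exactly as traced next:

  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3   # 512 B blocks, 16 B metadata, protection type 3
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420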
00:30:46.210 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:46.210 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:46.210 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.210 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.210 bdev_null0 00:30:46.210 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.210 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:46.210 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.470 [2024-07-15 21:46:36.045430] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:46.470 21:46:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:46.470 { 00:30:46.470 "params": { 00:30:46.471 "name": "Nvme$subsystem", 00:30:46.471 "trtype": "$TEST_TRANSPORT", 00:30:46.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.471 "adrfam": "ipv4", 00:30:46.471 "trsvcid": "$NVMF_PORT", 00:30:46.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.471 "hdgst": ${hdgst:-false}, 
00:30:46.471 "ddgst": ${ddgst:-false} 00:30:46.471 }, 00:30:46.471 "method": "bdev_nvme_attach_controller" 00:30:46.471 } 00:30:46.471 EOF 00:30:46.471 )") 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:46.471 "params": { 00:30:46.471 "name": "Nvme0", 00:30:46.471 "trtype": "tcp", 00:30:46.471 "traddr": "10.0.0.2", 00:30:46.471 "adrfam": "ipv4", 00:30:46.471 "trsvcid": "4420", 00:30:46.471 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:46.471 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:46.471 "hdgst": false, 00:30:46.471 "ddgst": false 00:30:46.471 }, 00:30:46.471 "method": "bdev_nvme_attach_controller" 00:30:46.471 }' 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:46.471 21:46:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.730 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:46.730 ... 
00:30:46.730 fio-3.35 00:30:46.730 Starting 3 threads 00:30:46.730 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.309 00:30:53.309 filename0: (groupid=0, jobs=1): err= 0: pid=2388513: Mon Jul 15 21:46:41 2024 00:30:53.309 read: IOPS=80, BW=10.1MiB/s (10.5MB/s)(50.6MiB/5033msec) 00:30:53.309 slat (nsec): min=5443, max=33759, avg=8012.55, stdev=2131.60 00:30:53.309 clat (usec): min=7496, max=56984, avg=37260.32, stdev=20092.89 00:30:53.309 lat (usec): min=7504, max=57018, avg=37268.33, stdev=20093.14 00:30:53.309 clat percentiles (usec): 00:30:53.309 | 1.00th=[ 7963], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10814], 00:30:53.309 | 30.00th=[12387], 40.00th=[50070], 50.00th=[51643], 60.00th=[51643], 00:30:53.309 | 70.00th=[52691], 80.00th=[52691], 90.00th=[53216], 95.00th=[54264], 00:30:53.309 | 99.00th=[55313], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:30:53.309 | 99.99th=[56886] 00:30:53.309 bw ( KiB/s): min= 7680, max=15360, per=29.53%, avg=10291.20, stdev=2539.44, samples=10 00:30:53.309 iops : min= 60, max= 120, avg=80.40, stdev=19.84, samples=10 00:30:53.309 lat (msec) : 10=13.33%, 20=22.96%, 50=2.47%, 100=61.23% 00:30:53.309 cpu : usr=96.84%, sys=2.86%, ctx=9, majf=0, minf=159 00:30:53.309 IO depths : 1=17.0%, 2=83.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.309 issued rwts: total=405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.309 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:53.309 filename0: (groupid=0, jobs=1): err= 0: pid=2388514: Mon Jul 15 21:46:41 2024 00:30:53.309 read: IOPS=91, BW=11.5MiB/s (12.0MB/s)(57.8MiB/5036msec) 00:30:53.309 slat (nsec): min=5427, max=32670, avg=7512.63, stdev=2028.50 00:30:53.309 clat (usec): min=7378, max=92985, avg=32681.22, stdev=21566.52 00:30:53.309 lat (usec): min=7384, max=92991, avg=32688.74, stdev=21566.63 00:30:53.309 clat percentiles (usec): 00:30:53.309 | 1.00th=[ 7701], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 9634], 00:30:53.309 | 30.00th=[10945], 40.00th=[12518], 50.00th=[50070], 60.00th=[51643], 00:30:53.309 | 70.00th=[52167], 80.00th=[52691], 90.00th=[53216], 95.00th=[53740], 00:30:53.310 | 99.00th=[55837], 99.50th=[92799], 99.90th=[92799], 99.95th=[92799], 00:30:53.310 | 99.99th=[92799] 00:30:53.310 bw ( KiB/s): min= 6912, max=17664, per=33.72%, avg=11750.40, stdev=3435.55, samples=10 00:30:53.310 iops : min= 54, max= 138, avg=91.80, stdev=26.84, samples=10 00:30:53.310 lat (msec) : 10=23.38%, 20=24.03%, 50=1.52%, 100=51.08% 00:30:53.310 cpu : usr=96.70%, sys=3.00%, ctx=8, majf=0, minf=101 00:30:53.310 IO depths : 1=10.8%, 2=89.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.310 issued rwts: total=462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.310 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:53.310 filename0: (groupid=0, jobs=1): err= 0: pid=2388515: Mon Jul 15 21:46:41 2024 00:30:53.310 read: IOPS=100, BW=12.6MiB/s (13.2MB/s)(63.0MiB/5016msec) 00:30:53.310 slat (nsec): min=5434, max=32013, avg=7332.14, stdev=1994.19 00:30:53.310 clat (usec): min=6584, max=94724, avg=29827.59, stdev=21933.73 00:30:53.310 lat (usec): min=6590, max=94730, avg=29834.92, stdev=21934.12 00:30:53.310 clat percentiles (usec): 
00:30:53.310 | 1.00th=[ 7373], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9241], 00:30:53.310 | 30.00th=[10290], 40.00th=[11469], 50.00th=[12911], 60.00th=[51119], 00:30:53.310 | 70.00th=[52167], 80.00th=[52691], 90.00th=[53216], 95.00th=[54264], 00:30:53.310 | 99.00th=[58983], 99.50th=[92799], 99.90th=[94897], 99.95th=[94897], 00:30:53.310 | 99.99th=[94897] 00:30:53.310 bw ( KiB/s): min= 6925, max=25088, per=36.81%, avg=12826.90, stdev=4960.03, samples=10 00:30:53.310 iops : min= 54, max= 196, avg=100.20, stdev=38.76, samples=10 00:30:53.310 lat (msec) : 10=26.19%, 20=28.37%, 50=1.39%, 100=44.05% 00:30:53.310 cpu : usr=96.57%, sys=3.11%, ctx=10, majf=0, minf=123 00:30:53.310 IO depths : 1=7.9%, 2=92.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.310 issued rwts: total=504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.310 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:53.310 00:30:53.310 Run status group 0 (all jobs): 00:30:53.310 READ: bw=34.0MiB/s (35.7MB/s), 10.1MiB/s-12.6MiB/s (10.5MB/s-13.2MB/s), io=171MiB (180MB), run=5016-5036msec 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
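Between the two fio_dif_rand_params passes, destroy_subsystems reverses the setup and the null bdevs are recreated with DIF type 2 for the 4k, iodepth-16, 8-job configuration; with files=2 this means three subsystems (cnode0..2), which is why fio reports "Starting 24 threads" further down (8 jobs x 3 bdevs). The RPC pairing per subsystem, as traced above and below:

  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0                # tear down the DIF-type-3 pass
  rpc_cmd bdev_null_delete bdev_null0
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2    # recreate for the DIF-type-2 pass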
00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.310 bdev_null0 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.310 [2024-07-15 21:46:42.160768] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.310 bdev_null1 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.310 bdev_null2 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.310 { 00:30:53.310 "params": { 00:30:53.310 "name": "Nvme$subsystem", 00:30:53.310 "trtype": "$TEST_TRANSPORT", 00:30:53.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.310 "adrfam": "ipv4", 00:30:53.310 "trsvcid": "$NVMF_PORT", 00:30:53.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.310 "hdgst": ${hdgst:-false}, 00:30:53.310 "ddgst": ${ddgst:-false} 00:30:53.310 }, 00:30:53.310 "method": "bdev_nvme_attach_controller" 00:30:53.310 } 00:30:53.310 EOF 00:30:53.310 )") 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:53.310 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.311 { 00:30:53.311 "params": { 00:30:53.311 "name": "Nvme$subsystem", 00:30:53.311 "trtype": "$TEST_TRANSPORT", 00:30:53.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.311 "adrfam": "ipv4", 00:30:53.311 "trsvcid": "$NVMF_PORT", 00:30:53.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.311 "hdgst": ${hdgst:-false}, 00:30:53.311 "ddgst": ${ddgst:-false} 00:30:53.311 }, 00:30:53.311 "method": "bdev_nvme_attach_controller" 00:30:53.311 } 00:30:53.311 EOF 00:30:53.311 )") 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.311 { 00:30:53.311 "params": { 00:30:53.311 "name": "Nvme$subsystem", 00:30:53.311 "trtype": "$TEST_TRANSPORT", 00:30:53.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.311 "adrfam": "ipv4", 00:30:53.311 "trsvcid": "$NVMF_PORT", 00:30:53.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.311 "hdgst": ${hdgst:-false}, 00:30:53.311 "ddgst": ${ddgst:-false} 00:30:53.311 }, 00:30:53.311 "method": "bdev_nvme_attach_controller" 00:30:53.311 } 00:30:53.311 EOF 00:30:53.311 )") 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:53.311 "params": { 00:30:53.311 "name": "Nvme0", 00:30:53.311 "trtype": "tcp", 00:30:53.311 "traddr": "10.0.0.2", 00:30:53.311 "adrfam": "ipv4", 00:30:53.311 "trsvcid": "4420", 00:30:53.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:53.311 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:53.311 "hdgst": false, 00:30:53.311 "ddgst": false 00:30:53.311 }, 00:30:53.311 "method": "bdev_nvme_attach_controller" 00:30:53.311 },{ 00:30:53.311 "params": { 00:30:53.311 "name": "Nvme1", 00:30:53.311 "trtype": "tcp", 00:30:53.311 "traddr": "10.0.0.2", 00:30:53.311 "adrfam": "ipv4", 00:30:53.311 "trsvcid": "4420", 00:30:53.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:53.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:53.311 "hdgst": false, 00:30:53.311 "ddgst": false 00:30:53.311 }, 00:30:53.311 "method": "bdev_nvme_attach_controller" 00:30:53.311 },{ 00:30:53.311 "params": { 00:30:53.311 "name": "Nvme2", 00:30:53.311 "trtype": "tcp", 00:30:53.311 "traddr": "10.0.0.2", 00:30:53.311 "adrfam": "ipv4", 00:30:53.311 "trsvcid": "4420", 00:30:53.311 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:53.311 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:53.311 "hdgst": false, 00:30:53.311 "ddgst": false 00:30:53.311 }, 00:30:53.311 "method": "bdev_nvme_attach_controller" 00:30:53.311 }' 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:53.311 21:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.311 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:53.311 ... 00:30:53.311 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:53.311 ... 00:30:53.311 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:53.311 ... 00:30:53.311 fio-3.35 00:30:53.311 Starting 24 threads 00:30:53.311 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.589 00:31:05.589 filename0: (groupid=0, jobs=1): err= 0: pid=2390023: Mon Jul 15 21:46:53 2024 00:31:05.589 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.3MiB/10028msec) 00:31:05.589 slat (usec): min=5, max=118, avg=18.10, stdev=16.07 00:31:05.589 clat (usec): min=1668, max=58818, avg=32357.10, stdev=7342.42 00:31:05.589 lat (usec): min=1683, max=58825, avg=32375.20, stdev=7342.79 00:31:05.589 clat percentiles (usec): 00:31:05.589 | 1.00th=[ 2573], 5.00th=[23462], 10.00th=[28181], 20.00th=[31327], 00:31:05.589 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:05.589 | 70.00th=[32375], 80.00th=[33162], 90.00th=[38011], 95.00th=[47973], 00:31:05.589 | 99.00th=[54264], 99.50th=[55837], 99.90th=[58983], 99.95th=[58983], 00:31:05.589 | 99.99th=[58983] 00:31:05.590 bw ( KiB/s): min= 1616, max= 3072, per=4.13%, avg=1969.00, stdev=284.13, samples=20 00:31:05.590 iops : min= 404, max= 768, avg=492.25, stdev=71.03, samples=20 00:31:05.590 lat (msec) : 2=0.04%, 4=1.72%, 10=1.09%, 20=1.19%, 50=92.25% 00:31:05.590 lat (msec) : 100=3.71% 00:31:05.590 cpu : usr=98.51%, sys=1.15%, ctx=47, majf=0, minf=20 00:31:05.590 IO depths : 1=3.0%, 2=6.2%, 4=14.8%, 8=65.1%, 16=11.0%, 32=0.0%, >=64=0.0% 00:31:05.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 complete : 0=0.0%, 4=91.8%, 8=3.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 issued rwts: total=4939,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.590 filename0: (groupid=0, jobs=1): err= 0: pid=2390024: Mon Jul 15 21:46:53 2024 00:31:05.590 read: IOPS=500, BW=2003KiB/s (2051kB/s)(19.6MiB/10032msec) 00:31:05.590 slat (usec): min=5, max=102, avg=28.14, stdev=17.17 00:31:05.590 clat (usec): min=6884, max=56804, avg=31676.23, stdev=2102.06 00:31:05.590 lat (usec): min=6899, max=56835, avg=31704.37, stdev=2101.72 00:31:05.590 clat percentiles (usec): 00:31:05.590 | 1.00th=[24773], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:31:05.590 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:05.590 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:05.590 | 99.00th=[36439], 99.50th=[37487], 99.90th=[51643], 99.95th=[56886], 00:31:05.590 | 99.99th=[56886] 00:31:05.590 bw ( KiB/s): min= 1916, max= 2176, per=4.20%, avg=2005.40, stdev=71.11, samples=20 00:31:05.590 iops : min= 479, max= 544, avg=501.35, stdev=17.78, samples=20 00:31:05.590 lat 
(msec) : 10=0.32%, 50=99.48%, 100=0.20% 00:31:05.590 cpu : usr=98.64%, sys=0.98%, ctx=87, majf=0, minf=30 00:31:05.590 IO depths : 1=5.6%, 2=11.3%, 4=24.0%, 8=52.1%, 16=7.0%, 32=0.0%, >=64=0.0% 00:31:05.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.590 filename0: (groupid=0, jobs=1): err= 0: pid=2390025: Mon Jul 15 21:46:53 2024 00:31:05.590 read: IOPS=460, BW=1841KiB/s (1885kB/s)(18.0MiB/10008msec) 00:31:05.590 slat (nsec): min=5566, max=98032, avg=16062.92, stdev=13874.44 00:31:05.590 clat (usec): min=7301, max=62003, avg=34686.19, stdev=7487.61 00:31:05.590 lat (usec): min=7310, max=62009, avg=34702.25, stdev=7486.38 00:31:05.590 clat percentiles (usec): 00:31:05.590 | 1.00th=[13435], 5.00th=[25560], 10.00th=[30016], 20.00th=[31589], 00:31:05.590 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:05.590 | 70.00th=[35390], 80.00th=[39584], 90.00th=[46924], 95.00th=[51119], 00:31:05.590 | 99.00th=[55837], 99.50th=[56886], 99.90th=[61080], 99.95th=[61080], 00:31:05.590 | 99.99th=[62129] 00:31:05.590 bw ( KiB/s): min= 1696, max= 1944, per=3.86%, avg=1839.74, stdev=70.84, samples=19 00:31:05.590 iops : min= 424, max= 486, avg=459.89, stdev=17.73, samples=19 00:31:05.590 lat (msec) : 10=0.09%, 20=1.74%, 50=91.53%, 100=6.64% 00:31:05.590 cpu : usr=98.91%, sys=0.79%, ctx=16, majf=0, minf=25 00:31:05.590 IO depths : 1=0.3%, 2=0.8%, 4=8.7%, 8=75.5%, 16=14.6%, 32=0.0%, >=64=0.0% 00:31:05.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 complete : 0=0.0%, 4=90.6%, 8=5.9%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 issued rwts: total=4605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.590 filename0: (groupid=0, jobs=1): err= 0: pid=2390026: Mon Jul 15 21:46:53 2024 00:31:05.590 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10010msec) 00:31:05.590 slat (nsec): min=5616, max=98566, avg=27047.81, stdev=15001.86 00:31:05.590 clat (usec): min=25396, max=35295, avg=31751.77, stdev=638.50 00:31:05.590 lat (usec): min=25404, max=35319, avg=31778.82, stdev=637.99 00:31:05.590 clat percentiles (usec): 00:31:05.590 | 1.00th=[30540], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:31:05.590 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:05.590 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32113], 95.00th=[32637], 00:31:05.590 | 99.00th=[33424], 99.50th=[33424], 99.90th=[35390], 99.95th=[35390], 00:31:05.590 | 99.99th=[35390] 00:31:05.590 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=2000.58, stdev=63.24, samples=19 00:31:05.590 iops : min= 480, max= 512, avg=500.11, stdev=15.78, samples=19 00:31:05.590 lat (msec) : 50=100.00% 00:31:05.590 cpu : usr=99.10%, sys=0.59%, ctx=14, majf=0, minf=19 00:31:05.590 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:05.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.590 filename0: (groupid=0, jobs=1): err= 0: pid=2390027: Mon Jul 15 
21:46:53 2024 00:31:05.590 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10016msec) 00:31:05.590 slat (usec): min=5, max=112, avg=16.86, stdev=15.27 00:31:05.590 clat (usec): min=24914, max=41307, avg=31858.18, stdev=932.62 00:31:05.590 lat (usec): min=24933, max=41326, avg=31875.05, stdev=930.87 00:31:05.590 clat percentiles (usec): 00:31:05.590 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31589], 00:31:05.590 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:05.590 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32113], 95.00th=[32637], 00:31:05.590 | 99.00th=[33162], 99.50th=[40109], 99.90th=[41157], 99.95th=[41157], 00:31:05.590 | 99.99th=[41157] 00:31:05.590 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1996.95, stdev=64.15, samples=20 00:31:05.590 iops : min= 480, max= 512, avg=499.20, stdev=16.08, samples=20 00:31:05.590 lat (msec) : 50=100.00% 00:31:05.590 cpu : usr=99.07%, sys=0.64%, ctx=11, majf=0, minf=23 00:31:05.590 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:05.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.590 filename0: (groupid=0, jobs=1): err= 0: pid=2390028: Mon Jul 15 21:46:53 2024 00:31:05.590 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10008msec) 00:31:05.590 slat (usec): min=5, max=100, avg=29.39, stdev=14.39 00:31:05.590 clat (usec): min=9963, max=55143, avg=31706.33, stdev=1902.29 00:31:05.590 lat (usec): min=9979, max=55165, avg=31735.72, stdev=1902.12 00:31:05.590 clat percentiles (usec): 00:31:05.590 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:31:05.590 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:05.590 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:31:05.590 | 99.00th=[33162], 99.50th=[33424], 99.90th=[55313], 99.95th=[55313], 00:31:05.590 | 99.99th=[55313] 00:31:05.590 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1994.26, stdev=77.54, samples=19 00:31:05.590 iops : min= 448, max= 512, avg=498.53, stdev=19.42, samples=19 00:31:05.590 lat (msec) : 10=0.06%, 20=0.26%, 50=99.36%, 100=0.32% 00:31:05.590 cpu : usr=99.09%, sys=0.58%, ctx=52, majf=0, minf=30 00:31:05.590 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:05.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.590 filename0: (groupid=0, jobs=1): err= 0: pid=2390029: Mon Jul 15 21:46:53 2024 00:31:05.590 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10014msec) 00:31:05.590 slat (usec): min=4, max=104, avg=28.50, stdev=20.39 00:31:05.590 clat (usec): min=24934, max=42262, avg=31757.42, stdev=795.34 00:31:05.590 lat (usec): min=24966, max=42276, avg=31785.92, stdev=793.32 00:31:05.590 clat percentiles (usec): 00:31:05.590 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:31:05.590 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:05.590 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32375], 00:31:05.590 | 
99.00th=[33162], 99.50th=[33424], 99.90th=[40109], 99.95th=[40109], 00:31:05.590 | 99.99th=[42206] 00:31:05.590 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1996.95, stdev=64.15, samples=20 00:31:05.590 iops : min= 480, max= 512, avg=499.20, stdev=16.08, samples=20 00:31:05.590 lat (msec) : 50=100.00% 00:31:05.590 cpu : usr=99.23%, sys=0.47%, ctx=11, majf=0, minf=23 00:31:05.590 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:05.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.590 filename0: (groupid=0, jobs=1): err= 0: pid=2390030: Mon Jul 15 21:46:53 2024 00:31:05.590 read: IOPS=503, BW=2015KiB/s (2063kB/s)(19.7MiB/10017msec) 00:31:05.590 slat (usec): min=5, max=119, avg=18.38, stdev=15.75 00:31:05.590 clat (usec): min=8669, max=43036, avg=31600.86, stdev=1991.14 00:31:05.590 lat (usec): min=8677, max=43042, avg=31619.24, stdev=1992.04 00:31:05.590 clat percentiles (usec): 00:31:05.590 | 1.00th=[23200], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:31:05.590 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:05.590 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32375], 00:31:05.590 | 99.00th=[33817], 99.50th=[34866], 99.90th=[39584], 99.95th=[43254], 00:31:05.590 | 99.99th=[43254] 00:31:05.590 bw ( KiB/s): min= 1920, max= 2224, per=4.22%, avg=2012.15, stdev=79.21, samples=20 00:31:05.590 iops : min= 480, max= 556, avg=503.00, stdev=19.85, samples=20 00:31:05.590 lat (msec) : 10=0.20%, 20=0.73%, 50=99.07% 00:31:05.590 cpu : usr=96.64%, sys=1.68%, ctx=129, majf=0, minf=19 00:31:05.590 IO depths : 1=6.0%, 2=12.1%, 4=24.4%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:05.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 issued rwts: total=5046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.590 filename1: (groupid=0, jobs=1): err= 0: pid=2390031: Mon Jul 15 21:46:53 2024 00:31:05.590 read: IOPS=504, BW=2017KiB/s (2066kB/s)(19.7MiB/10006msec) 00:31:05.590 slat (usec): min=5, max=207, avg=11.83, stdev= 8.70 00:31:05.590 clat (usec): min=5140, max=59024, avg=31630.71, stdev=2308.25 00:31:05.590 lat (usec): min=5151, max=59032, avg=31642.54, stdev=2307.22 00:31:05.590 clat percentiles (usec): 00:31:05.590 | 1.00th=[22676], 5.00th=[30540], 10.00th=[31065], 20.00th=[31589], 00:31:05.590 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:05.590 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:05.590 | 99.00th=[33817], 99.50th=[33817], 99.90th=[36439], 99.95th=[36963], 00:31:05.590 | 99.99th=[58983] 00:31:05.590 bw ( KiB/s): min= 1920, max= 2080, per=4.23%, avg=2016.58, stdev=58.23, samples=19 00:31:05.590 iops : min= 480, max= 520, avg=504.11, stdev=14.55, samples=19 00:31:05.590 lat (msec) : 10=0.44%, 20=0.36%, 50=99.17%, 100=0.04% 00:31:05.590 cpu : usr=98.44%, sys=0.91%, ctx=15, majf=0, minf=21 00:31:05.590 IO depths : 1=4.8%, 2=11.0%, 4=24.8%, 8=51.7%, 16=7.7%, 32=0.0%, >=64=0.0% 00:31:05.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 complete : 0=0.0%, 4=94.1%, 
8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.590 issued rwts: total=5046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.590 filename1: (groupid=0, jobs=1): err= 0: pid=2390032: Mon Jul 15 21:46:53 2024 00:31:05.590 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10014msec) 00:31:05.590 slat (usec): min=5, max=119, avg=33.01, stdev=20.34 00:31:05.590 clat (usec): min=20240, max=42724, avg=31697.91, stdev=885.75 00:31:05.590 lat (usec): min=20248, max=42782, avg=31730.92, stdev=885.68 00:31:05.590 clat percentiles (usec): 00:31:05.590 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:31:05.590 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:05.590 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32375], 00:31:05.590 | 99.00th=[33424], 99.50th=[33817], 99.90th=[39584], 99.95th=[39584], 00:31:05.590 | 99.99th=[42730] 00:31:05.590 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1996.95, stdev=64.15, samples=20 00:31:05.590 iops : min= 480, max= 512, avg=499.20, stdev=16.08, samples=20 00:31:05.590 lat (msec) : 50=100.00% 00:31:05.590 cpu : usr=94.39%, sys=2.78%, ctx=211, majf=0, minf=23 00:31:05.590 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:05.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.591 filename1: (groupid=0, jobs=1): err= 0: pid=2390033: Mon Jul 15 21:46:53 2024 00:31:05.591 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10015msec) 00:31:05.591 slat (usec): min=4, max=115, avg=23.96, stdev=20.20 00:31:05.591 clat (usec): min=24881, max=40361, avg=31803.39, stdev=789.49 00:31:05.591 lat (usec): min=24888, max=40376, avg=31827.34, stdev=787.18 00:31:05.591 clat percentiles (usec): 00:31:05.591 | 1.00th=[30540], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:31:05.591 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:05.591 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:05.591 | 99.00th=[33162], 99.50th=[33424], 99.90th=[40109], 99.95th=[40109], 00:31:05.591 | 99.99th=[40109] 00:31:05.591 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1996.80, stdev=64.34, samples=20 00:31:05.591 iops : min= 480, max= 512, avg=499.20, stdev=16.08, samples=20 00:31:05.591 lat (msec) : 50=100.00% 00:31:05.591 cpu : usr=97.27%, sys=1.38%, ctx=62, majf=0, minf=24 00:31:05.591 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:05.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.591 filename1: (groupid=0, jobs=1): err= 0: pid=2390034: Mon Jul 15 21:46:53 2024 00:31:05.591 read: IOPS=499, BW=1999KiB/s (2047kB/s)(19.6MiB/10019msec) 00:31:05.591 slat (usec): min=5, max=138, avg=18.84, stdev=19.13 00:31:05.591 clat (usec): min=25159, max=46969, avg=31855.98, stdev=970.53 00:31:05.591 lat (usec): min=25167, max=46987, avg=31874.82, stdev=969.08 00:31:05.591 clat percentiles (usec): 00:31:05.591 | 1.00th=[30540], 
5.00th=[30802], 10.00th=[31065], 20.00th=[31589], 00:31:05.591 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:05.591 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:05.591 | 99.00th=[33424], 99.50th=[34341], 99.90th=[44303], 99.95th=[44303], 00:31:05.591 | 99.99th=[46924] 00:31:05.591 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1996.30, stdev=63.67, samples=20 00:31:05.591 iops : min= 480, max= 512, avg=499.00, stdev=15.94, samples=20 00:31:05.591 lat (msec) : 50=100.00% 00:31:05.591 cpu : usr=98.42%, sys=0.88%, ctx=179, majf=0, minf=24 00:31:05.591 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:05.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.591 filename1: (groupid=0, jobs=1): err= 0: pid=2390035: Mon Jul 15 21:46:53 2024 00:31:05.591 read: IOPS=502, BW=2010KiB/s (2058kB/s)(19.6MiB/10010msec) 00:31:05.591 slat (usec): min=5, max=114, avg=31.02, stdev=19.56 00:31:05.591 clat (usec): min=6747, max=56416, avg=31546.80, stdev=2451.25 00:31:05.591 lat (usec): min=6772, max=56433, avg=31577.82, stdev=2452.27 00:31:05.591 clat percentiles (usec): 00:31:05.591 | 1.00th=[20055], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:31:05.591 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:05.591 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32637], 00:31:05.591 | 99.00th=[36963], 99.50th=[38536], 99.90th=[52167], 99.95th=[52167], 00:31:05.591 | 99.99th=[56361] 00:31:05.591 bw ( KiB/s): min= 1840, max= 2176, per=4.20%, avg=2003.11, stdev=81.54, samples=19 00:31:05.591 iops : min= 460, max= 544, avg=500.74, stdev=20.37, samples=19 00:31:05.591 lat (msec) : 10=0.40%, 20=0.60%, 50=98.69%, 100=0.32% 00:31:05.591 cpu : usr=96.95%, sys=1.52%, ctx=35, majf=0, minf=19 00:31:05.591 IO depths : 1=5.3%, 2=11.1%, 4=23.5%, 8=52.6%, 16=7.5%, 32=0.0%, >=64=0.0% 00:31:05.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 issued rwts: total=5030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.591 filename1: (groupid=0, jobs=1): err= 0: pid=2390036: Mon Jul 15 21:46:53 2024 00:31:05.591 read: IOPS=480, BW=1921KiB/s (1967kB/s)(18.8MiB/10016msec) 00:31:05.591 slat (usec): min=5, max=120, avg=18.02, stdev=16.60 00:31:05.591 clat (usec): min=7090, max=60296, avg=33209.45, stdev=7068.93 00:31:05.591 lat (usec): min=7105, max=60304, avg=33227.47, stdev=7067.99 00:31:05.591 clat percentiles (usec): 00:31:05.591 | 1.00th=[12125], 5.00th=[23462], 10.00th=[30016], 20.00th=[31327], 00:31:05.591 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:05.591 | 70.00th=[32375], 80.00th=[34866], 90.00th=[41157], 95.00th=[50070], 00:31:05.591 | 99.00th=[55313], 99.50th=[56886], 99.90th=[57934], 99.95th=[58459], 00:31:05.591 | 99.99th=[60556] 00:31:05.591 bw ( KiB/s): min= 1792, max= 2048, per=4.03%, avg=1920.15, stdev=82.00, samples=20 00:31:05.591 iops : min= 448, max= 512, avg=480.00, stdev=20.56, samples=20 00:31:05.591 lat (msec) : 10=0.56%, 20=3.24%, 50=90.71%, 100=5.49% 00:31:05.591 cpu : usr=99.11%, sys=0.59%, 
ctx=13, majf=0, minf=21 00:31:05.591 IO depths : 1=0.2%, 2=1.4%, 4=9.0%, 8=73.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:31:05.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 complete : 0=0.0%, 4=91.5%, 8=5.2%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 issued rwts: total=4810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.591 filename1: (groupid=0, jobs=1): err= 0: pid=2390037: Mon Jul 15 21:46:53 2024 00:31:05.591 read: IOPS=499, BW=1998KiB/s (2046kB/s)(19.5MiB/10008msec) 00:31:05.591 slat (usec): min=5, max=119, avg=28.15, stdev=19.41 00:31:05.591 clat (usec): min=8604, max=59553, avg=31734.39, stdev=2794.80 00:31:05.591 lat (usec): min=8613, max=59562, avg=31762.54, stdev=2795.24 00:31:05.591 clat percentiles (usec): 00:31:05.591 | 1.00th=[25297], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:31:05.591 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:05.591 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32637], 00:31:05.591 | 99.00th=[34866], 99.50th=[54789], 99.90th=[59507], 99.95th=[59507], 00:31:05.591 | 99.99th=[59507] 00:31:05.591 bw ( KiB/s): min= 1760, max= 2048, per=4.17%, avg=1990.89, stdev=81.50, samples=19 00:31:05.591 iops : min= 440, max= 512, avg=497.68, stdev=20.41, samples=19 00:31:05.591 lat (msec) : 10=0.46%, 50=98.88%, 100=0.66% 00:31:05.591 cpu : usr=99.18%, sys=0.53%, ctx=11, majf=0, minf=29 00:31:05.591 IO depths : 1=5.9%, 2=11.9%, 4=24.1%, 8=51.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:05.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 issued rwts: total=5000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.591 filename1: (groupid=0, jobs=1): err= 0: pid=2390038: Mon Jul 15 21:46:53 2024 00:31:05.591 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10009msec) 00:31:05.591 slat (nsec): min=5573, max=72838, avg=11965.82, stdev=9024.93 00:31:05.591 clat (usec): min=14471, max=56967, avg=31861.20, stdev=1674.72 00:31:05.591 lat (usec): min=14477, max=56983, avg=31873.17, stdev=1674.77 00:31:05.591 clat percentiles (usec): 00:31:05.591 | 1.00th=[30016], 5.00th=[30802], 10.00th=[31065], 20.00th=[31589], 00:31:05.591 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:05.591 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:05.591 | 99.00th=[33424], 99.50th=[36963], 99.90th=[52691], 99.95th=[52691], 00:31:05.591 | 99.99th=[56886] 00:31:05.591 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1993.89, stdev=77.91, samples=19 00:31:05.591 iops : min= 448, max= 512, avg=498.47, stdev=19.48, samples=19 00:31:05.591 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:31:05.591 cpu : usr=99.11%, sys=0.57%, ctx=46, majf=0, minf=20 00:31:05.591 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:05.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.591 filename2: (groupid=0, jobs=1): err= 0: pid=2390039: Mon Jul 15 21:46:53 2024 00:31:05.591 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10004msec) 
00:31:05.591 slat (usec): min=5, max=106, avg=25.48, stdev=16.70 00:31:05.591 clat (usec): min=3889, max=55817, avg=31642.04, stdev=3069.26 00:31:05.591 lat (usec): min=3907, max=55830, avg=31667.52, stdev=3069.71 00:31:05.591 clat percentiles (usec): 00:31:05.591 | 1.00th=[18482], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:31:05.591 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:05.591 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32900], 00:31:05.591 | 99.00th=[39060], 99.50th=[47973], 99.90th=[54789], 99.95th=[55313], 00:31:05.591 | 99.99th=[55837] 00:31:05.591 bw ( KiB/s): min= 1916, max= 2176, per=4.21%, avg=2007.37, stdev=72.50, samples=19 00:31:05.591 iops : min= 479, max= 544, avg=501.84, stdev=18.12, samples=19 00:31:05.591 lat (msec) : 4=0.16%, 10=0.48%, 20=0.68%, 50=98.57%, 100=0.12% 00:31:05.591 cpu : usr=98.77%, sys=0.81%, ctx=98, majf=0, minf=28 00:31:05.591 IO depths : 1=5.3%, 2=10.8%, 4=23.5%, 8=53.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:31:05.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.591 filename2: (groupid=0, jobs=1): err= 0: pid=2390040: Mon Jul 15 21:46:53 2024 00:31:05.591 read: IOPS=503, BW=2015KiB/s (2063kB/s)(19.7MiB/10022msec) 00:31:05.591 slat (usec): min=5, max=123, avg=15.68, stdev=16.06 00:31:05.591 clat (usec): min=14962, max=47619, avg=31641.38, stdev=2360.61 00:31:05.591 lat (usec): min=14969, max=47635, avg=31657.06, stdev=2361.24 00:31:05.591 clat percentiles (usec): 00:31:05.591 | 1.00th=[20055], 5.00th=[29754], 10.00th=[30802], 20.00th=[31327], 00:31:05.591 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:05.591 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:05.591 | 99.00th=[37487], 99.50th=[40633], 99.90th=[45351], 99.95th=[45351], 00:31:05.591 | 99.99th=[47449] 00:31:05.591 bw ( KiB/s): min= 1920, max= 2240, per=4.22%, avg=2012.15, stdev=81.44, samples=20 00:31:05.591 iops : min= 480, max= 560, avg=503.00, stdev=20.35, samples=20 00:31:05.591 lat (msec) : 20=0.91%, 50=99.09% 00:31:05.591 cpu : usr=98.99%, sys=0.70%, ctx=20, majf=0, minf=19 00:31:05.591 IO depths : 1=4.4%, 2=10.4%, 4=24.1%, 8=53.0%, 16=8.1%, 32=0.0%, >=64=0.0% 00:31:05.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 issued rwts: total=5048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.591 filename2: (groupid=0, jobs=1): err= 0: pid=2390041: Mon Jul 15 21:46:53 2024 00:31:05.591 read: IOPS=492, BW=1968KiB/s (2015kB/s)(19.2MiB/10008msec) 00:31:05.591 slat (usec): min=5, max=117, avg=20.60, stdev=18.32 00:31:05.591 clat (usec): min=7893, max=89602, avg=32433.03, stdev=5620.39 00:31:05.591 lat (usec): min=7899, max=89622, avg=32453.63, stdev=5619.32 00:31:05.591 clat percentiles (usec): 00:31:05.591 | 1.00th=[11994], 5.00th=[30540], 10.00th=[31065], 20.00th=[31589], 00:31:05.591 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:05.591 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[39060], 00:31:05.591 | 99.00th=[53740], 99.50th=[56886], 99.90th=[81265], 99.95th=[89654], 00:31:05.591 
| 99.99th=[89654] 00:31:05.591 bw ( KiB/s): min= 1664, max= 2048, per=4.11%, avg=1960.63, stdev=101.44, samples=19 00:31:05.591 iops : min= 416, max= 512, avg=490.16, stdev=25.36, samples=19 00:31:05.591 lat (msec) : 10=0.12%, 20=1.69%, 50=95.11%, 100=3.09% 00:31:05.591 cpu : usr=97.52%, sys=1.41%, ctx=1113, majf=0, minf=27 00:31:05.591 IO depths : 1=0.3%, 2=0.5%, 4=3.3%, 8=78.7%, 16=17.2%, 32=0.0%, >=64=0.0% 00:31:05.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 complete : 0=0.0%, 4=90.0%, 8=9.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.591 issued rwts: total=4924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.591 filename2: (groupid=0, jobs=1): err= 0: pid=2390042: Mon Jul 15 21:46:53 2024 00:31:05.591 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10008msec) 00:31:05.591 slat (usec): min=5, max=111, avg=30.25, stdev=19.48 00:31:05.591 clat (usec): min=9338, max=55386, avg=31767.65, stdev=2367.16 00:31:05.592 lat (usec): min=9349, max=55404, avg=31797.91, stdev=2366.47 00:31:05.592 clat percentiles (usec): 00:31:05.592 | 1.00th=[28967], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:31:05.592 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:05.592 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32637], 00:31:05.592 | 99.00th=[40109], 99.50th=[51643], 99.90th=[55313], 99.95th=[55313], 00:31:05.592 | 99.99th=[55313] 00:31:05.592 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1987.53, stdev=89.05, samples=19 00:31:05.592 iops : min= 448, max= 512, avg=496.84, stdev=22.29, samples=19 00:31:05.592 lat (msec) : 10=0.32%, 50=99.16%, 100=0.52% 00:31:05.592 cpu : usr=99.20%, sys=0.49%, ctx=11, majf=0, minf=25 00:31:05.592 IO depths : 1=6.1%, 2=12.1%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:05.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.592 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.592 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.592 filename2: (groupid=0, jobs=1): err= 0: pid=2390043: Mon Jul 15 21:46:53 2024 00:31:05.592 read: IOPS=505, BW=2021KiB/s (2070kB/s)(19.8MiB/10006msec) 00:31:05.592 slat (nsec): min=5639, max=59327, avg=12552.67, stdev=8091.55 00:31:05.592 clat (usec): min=7462, max=36187, avg=31559.67, stdev=2285.15 00:31:05.592 lat (usec): min=7485, max=36195, avg=31572.23, stdev=2284.48 00:31:05.592 clat percentiles (usec): 00:31:05.592 | 1.00th=[22414], 5.00th=[30540], 10.00th=[30802], 20.00th=[31589], 00:31:05.592 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:05.592 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:05.592 | 99.00th=[33162], 99.50th=[33817], 99.90th=[34341], 99.95th=[35390], 00:31:05.592 | 99.99th=[36439] 00:31:05.592 bw ( KiB/s): min= 1920, max= 2176, per=4.24%, avg=2020.79, stdev=80.63, samples=19 00:31:05.592 iops : min= 480, max= 544, avg=505.16, stdev=20.15, samples=19 00:31:05.592 lat (msec) : 10=0.63%, 20=0.32%, 50=99.05% 00:31:05.592 cpu : usr=94.95%, sys=2.49%, ctx=109, majf=0, minf=34 00:31:05.592 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:05.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.592 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:05.592 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.592 filename2: (groupid=0, jobs=1): err= 0: pid=2390044: Mon Jul 15 21:46:53 2024 00:31:05.592 read: IOPS=501, BW=2006KiB/s (2054kB/s)(19.6MiB/10020msec) 00:31:05.592 slat (usec): min=5, max=159, avg=16.65, stdev=13.53 00:31:05.592 clat (usec): min=12522, max=42592, avg=31769.12, stdev=1386.61 00:31:05.592 lat (usec): min=12528, max=42610, avg=31785.76, stdev=1386.54 00:31:05.592 clat percentiles (usec): 00:31:05.592 | 1.00th=[30016], 5.00th=[30802], 10.00th=[31065], 20.00th=[31589], 00:31:05.592 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:05.592 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:05.592 | 99.00th=[32900], 99.50th=[33424], 99.90th=[42730], 99.95th=[42730], 00:31:05.592 | 99.99th=[42730] 00:31:05.592 bw ( KiB/s): min= 1920, max= 2048, per=4.20%, avg=2003.20, stdev=62.64, samples=20 00:31:05.592 iops : min= 480, max= 512, avg=500.80, stdev=15.66, samples=20 00:31:05.592 lat (msec) : 20=0.32%, 50=99.68% 00:31:05.592 cpu : usr=99.17%, sys=0.52%, ctx=10, majf=0, minf=18 00:31:05.592 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:05.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.592 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.592 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.592 filename2: (groupid=0, jobs=1): err= 0: pid=2390045: Mon Jul 15 21:46:53 2024 00:31:05.592 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.5MiB/10015msec) 00:31:05.592 slat (usec): min=4, max=106, avg=26.99, stdev=17.63 00:31:05.592 clat (usec): min=12734, max=55164, avg=31922.03, stdev=3008.74 00:31:05.592 lat (usec): min=12741, max=55183, avg=31949.02, stdev=3007.48 00:31:05.592 clat percentiles (usec): 00:31:05.592 | 1.00th=[23987], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:31:05.592 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:05.592 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[33162], 00:31:05.592 | 99.00th=[49021], 99.50th=[50070], 99.90th=[55313], 99.95th=[55313], 00:31:05.592 | 99.99th=[55313] 00:31:05.592 bw ( KiB/s): min= 1872, max= 2048, per=4.16%, avg=1986.00, stdev=63.99, samples=20 00:31:05.592 iops : min= 468, max= 512, avg=496.50, stdev=16.00, samples=20 00:31:05.592 lat (msec) : 20=0.72%, 50=98.88%, 100=0.40% 00:31:05.592 cpu : usr=98.71%, sys=0.95%, ctx=20, majf=0, minf=29 00:31:05.592 IO depths : 1=5.2%, 2=10.5%, 4=23.3%, 8=53.5%, 16=7.5%, 32=0.0%, >=64=0.0% 00:31:05.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.592 complete : 0=0.0%, 4=93.8%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.592 issued rwts: total=4981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.592 filename2: (groupid=0, jobs=1): err= 0: pid=2390046: Mon Jul 15 21:46:53 2024 00:31:05.592 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.5MiB/10014msec) 00:31:05.592 slat (nsec): min=5580, max=94648, avg=21437.20, stdev=15190.57 00:31:05.592 clat (usec): min=14433, max=57481, avg=32016.17, stdev=2378.61 00:31:05.592 lat (usec): min=14450, max=57487, avg=32037.60, stdev=2378.11 00:31:05.592 clat percentiles (usec): 00:31:05.592 | 1.00th=[26346], 
5.00th=[30802], 10.00th=[31065], 20.00th=[31589], 00:31:05.592 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:05.592 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32900], 00:31:05.592 | 99.00th=[47449], 99.50th=[48497], 99.90th=[54789], 99.95th=[56886], 00:31:05.592 | 99.99th=[57410] 00:31:05.592 bw ( KiB/s): min= 1888, max= 2048, per=4.17%, avg=1987.50, stdev=55.24, samples=20 00:31:05.592 iops : min= 472, max= 512, avg=496.80, stdev=13.76, samples=20 00:31:05.592 lat (msec) : 20=0.32%, 50=99.52%, 100=0.16% 00:31:05.592 cpu : usr=98.91%, sys=0.69%, ctx=122, majf=0, minf=27 00:31:05.592 IO depths : 1=2.2%, 2=4.6%, 4=10.5%, 8=68.9%, 16=13.8%, 32=0.0%, >=64=0.0% 00:31:05.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.592 complete : 0=0.0%, 4=91.3%, 8=6.2%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.592 issued rwts: total=4980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:05.592 00:31:05.592 Run status group 0 (all jobs): 00:31:05.592 READ: bw=46.6MiB/s (48.8MB/s), 1841KiB/s-2021KiB/s (1885kB/s-2070kB/s), io=467MiB (490MB), run=10004-10032msec 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
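The create/destroy traces in this test wrap a short, fixed RPC sequence per subsystem. A minimal sketch of that sequence, assuming the stock scripts/rpc.py client stands in for the rpc_cmd wrapper; the sub_id and DIF values below are illustrative, copied from the cnode2 case earlier in the trace:

#!/usr/bin/env bash
# Hypothetical replay of what create_subsystem/destroy_subsystem issue, based on the
# rpc_cmd calls traced above; scripts/rpc.py talks to /var/tmp/spdk.sock by default.
rpc=./scripts/rpc.py
sub_id=2
null_dif=2   # this pass used DIF type 2; the follow-up pass switches to NULL_DIF=1

# create_subsystem: 64 MiB null bdev with 512-byte blocks plus 16 bytes of metadata,
# then an NQN, a namespace backed by that bdev, and a TCP listener on 10.0.0.2:4420.
$rpc bdev_null_create "bdev_null${sub_id}" 64 512 --md-size 16 --dif-type "$null_dif"
$rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}" \
    --serial-number "53313233-${sub_id}" --allow-any-host
$rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub_id}" "bdev_null${sub_id}"
$rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub_id}" -t tcp -a 10.0.0.2 -s 4420

# destroy_subsystem: drop the subsystem first, then the backing null bdev,
# matching the delete order in the teardown trace.
$rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}"
$rpc bdev_null_delete "bdev_null${sub_id}"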
00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.592 bdev_null0 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.592 21:46:53 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.592 [2024-07-15 21:46:53.834502] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.592 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.592 bdev_null1 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
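The gen_nvmf_target_json loop starting here builds one bdev_nvme_attach_controller stanza per requested subsystem and then comma-joins them. A condensed sketch of that pattern, assuming the same environment the trace expands (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420) and using printf in place of the script's heredocs:

# Hypothetical condensation of the gen_nvmf_target_json logic traced above.
config=()
for subsystem in "${@:-1}"; do
  config+=("$(printf '{
  "params": {
    "name": "Nvme%s",
    "trtype": "%s",
    "traddr": "%s",
    "adrfam": "ipv4",
    "trsvcid": "%s",
    "subnqn": "nqn.2016-06.io.spdk:cnode%s",
    "hostnqn": "nqn.2016-06.io.spdk:host%s",
    "hdgst": %s,
    "ddgst": %s
  },
  "method": "bdev_nvme_attach_controller"
}' "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" \
   "$subsystem" "$subsystem" "${hdgst:-false}" "${ddgst:-false}")")
done
# Joining on a comma reproduces the '{...},{...}' block printed further down in the
# trace, which is what fio ultimately reads from /dev/fd/62.
IFS=,
printf '%s\n' "${config[*]}"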
00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:05.593 { 00:31:05.593 "params": { 00:31:05.593 "name": "Nvme$subsystem", 00:31:05.593 "trtype": "$TEST_TRANSPORT", 00:31:05.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.593 "adrfam": "ipv4", 00:31:05.593 "trsvcid": "$NVMF_PORT", 00:31:05.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.593 "hdgst": ${hdgst:-false}, 00:31:05.593 "ddgst": ${ddgst:-false} 00:31:05.593 }, 00:31:05.593 "method": "bdev_nvme_attach_controller" 00:31:05.593 } 00:31:05.593 EOF 00:31:05.593 )") 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:05.593 { 00:31:05.593 "params": { 00:31:05.593 "name": "Nvme$subsystem", 00:31:05.593 "trtype": "$TEST_TRANSPORT", 00:31:05.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.593 "adrfam": "ipv4", 00:31:05.593 "trsvcid": "$NVMF_PORT", 00:31:05.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.593 "hdgst": ${hdgst:-false}, 00:31:05.593 "ddgst": ${ddgst:-false} 00:31:05.593 }, 00:31:05.593 "method": "bdev_nvme_attach_controller" 00:31:05.593 } 00:31:05.593 EOF 00:31:05.593 )") 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:05.593 "params": { 00:31:05.593 "name": "Nvme0", 00:31:05.593 "trtype": "tcp", 00:31:05.593 "traddr": "10.0.0.2", 00:31:05.593 "adrfam": "ipv4", 00:31:05.593 "trsvcid": "4420", 00:31:05.593 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:05.593 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:05.593 "hdgst": false, 00:31:05.593 "ddgst": false 00:31:05.593 }, 00:31:05.593 "method": "bdev_nvme_attach_controller" 00:31:05.593 },{ 00:31:05.593 "params": { 00:31:05.593 "name": "Nvme1", 00:31:05.593 "trtype": "tcp", 00:31:05.593 "traddr": "10.0.0.2", 00:31:05.593 "adrfam": "ipv4", 00:31:05.593 "trsvcid": "4420", 00:31:05.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:05.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:05.593 "hdgst": false, 00:31:05.593 "ddgst": false 00:31:05.593 }, 00:31:05.593 "method": "bdev_nvme_attach_controller" 00:31:05.593 }' 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:05.593 21:46:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.593 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:05.593 ... 00:31:05.593 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:05.593 ... 
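The job listing just above matches the parameters set for this pass (bs=8k,16k,128k for reads, writes and trims respectively, numjobs=2, iodepth=8, runtime=5). A rough reconstruction of the job file that gen_fio_conf hands over on /dev/fd/61; the filenames Nvme0n1 and Nvme1n1 are assumed from the Nvme0/Nvme1 controller names in the JSON config and are not shown directly in the trace:

# Hypothetical stand-in for gen_fio_conf's output for this run; paths and bdev names assumed.
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

# Launched the same way as the traced command, with the SPDK fio plugin preloaded:
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvmf_target.json /tmp/dif_rand_params.fio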
00:31:05.593 fio-3.35 00:31:05.593 Starting 4 threads 00:31:05.593 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.883 00:31:10.883 filename0: (groupid=0, jobs=1): err= 0: pid=2392228: Mon Jul 15 21:46:59 2024 00:31:10.883 read: IOPS=2009, BW=15.7MiB/s (16.5MB/s)(78.6MiB/5003msec) 00:31:10.883 slat (nsec): min=5495, max=44285, avg=8267.86, stdev=2604.28 00:31:10.883 clat (usec): min=1476, max=7427, avg=3956.08, stdev=672.40 00:31:10.883 lat (usec): min=1500, max=7435, avg=3964.35, stdev=672.21 00:31:10.883 clat percentiles (usec): 00:31:10.883 | 1.00th=[ 2507], 5.00th=[ 2999], 10.00th=[ 3261], 20.00th=[ 3523], 00:31:10.883 | 30.00th=[ 3687], 40.00th=[ 3785], 50.00th=[ 3884], 60.00th=[ 4015], 00:31:10.883 | 70.00th=[ 4080], 80.00th=[ 4293], 90.00th=[ 4817], 95.00th=[ 5342], 00:31:10.883 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 6783], 99.95th=[ 7046], 00:31:10.884 | 99.99th=[ 7439] 00:31:10.884 bw ( KiB/s): min=15744, max=16512, per=25.67%, avg=16087.11, stdev=258.13, samples=9 00:31:10.884 iops : min= 1968, max= 2064, avg=2010.89, stdev=32.27, samples=9 00:31:10.884 lat (msec) : 2=0.51%, 4=58.59%, 10=40.90% 00:31:10.884 cpu : usr=97.20%, sys=2.52%, ctx=10, majf=0, minf=44 00:31:10.884 IO depths : 1=0.1%, 2=3.4%, 4=68.3%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:10.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.884 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.884 issued rwts: total=10056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.884 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:10.884 filename0: (groupid=0, jobs=1): err= 0: pid=2392229: Mon Jul 15 21:46:59 2024 00:31:10.884 read: IOPS=1884, BW=14.7MiB/s (15.4MB/s)(73.6MiB/5002msec) 00:31:10.884 slat (nsec): min=5414, max=35515, avg=9108.61, stdev=2442.34 00:31:10.884 clat (usec): min=2181, max=7966, avg=4220.28, stdev=680.48 00:31:10.884 lat (usec): min=2192, max=8001, avg=4229.38, stdev=680.48 00:31:10.884 clat percentiles (usec): 00:31:10.884 | 1.00th=[ 2900], 5.00th=[ 3326], 10.00th=[ 3523], 20.00th=[ 3720], 00:31:10.884 | 30.00th=[ 3851], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4178], 00:31:10.884 | 70.00th=[ 4424], 80.00th=[ 4686], 90.00th=[ 5145], 95.00th=[ 5604], 00:31:10.884 | 99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 7308], 99.95th=[ 7570], 00:31:10.884 | 99.99th=[ 7963] 00:31:10.884 bw ( KiB/s): min=14832, max=15296, per=24.02%, avg=15055.78, stdev=165.51, samples=9 00:31:10.884 iops : min= 1854, max= 1912, avg=1881.89, stdev=20.69, samples=9 00:31:10.884 lat (msec) : 4=42.05%, 10=57.95% 00:31:10.884 cpu : usr=97.28%, sys=2.40%, ctx=7, majf=0, minf=49 00:31:10.884 IO depths : 1=0.4%, 2=1.7%, 4=70.3%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:10.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.884 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.884 issued rwts: total=9424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.884 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:10.884 filename1: (groupid=0, jobs=1): err= 0: pid=2392230: Mon Jul 15 21:46:59 2024 00:31:10.884 read: IOPS=1854, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5002msec) 00:31:10.884 slat (usec): min=5, max=146, avg= 8.24, stdev= 2.71 00:31:10.884 clat (usec): min=1717, max=47495, avg=4291.47, stdev=1426.84 00:31:10.884 lat (usec): min=1723, max=47527, avg=4299.71, stdev=1427.07 00:31:10.884 clat percentiles (usec): 00:31:10.884 | 1.00th=[ 3064], 5.00th=[ 3392], 10.00th=[ 3589], 
20.00th=[ 3752], 00:31:10.884 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4293], 00:31:10.884 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 5211], 95.00th=[ 5538], 00:31:10.884 | 99.00th=[ 6259], 99.50th=[ 6390], 99.90th=[ 6783], 99.95th=[47449], 00:31:10.884 | 99.99th=[47449] 00:31:10.884 bw ( KiB/s): min=13258, max=15168, per=23.62%, avg=14801.11, stdev=595.05, samples=9 00:31:10.884 iops : min= 1657, max= 1896, avg=1850.11, stdev=74.46, samples=9 00:31:10.884 lat (msec) : 2=0.03%, 4=37.84%, 10=62.04%, 50=0.09% 00:31:10.884 cpu : usr=97.36%, sys=2.36%, ctx=7, majf=0, minf=52 00:31:10.884 IO depths : 1=0.3%, 2=1.3%, 4=70.2%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:10.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.884 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.884 issued rwts: total=9275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.884 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:10.884 filename1: (groupid=0, jobs=1): err= 0: pid=2392231: Mon Jul 15 21:46:59 2024 00:31:10.884 read: IOPS=2086, BW=16.3MiB/s (17.1MB/s)(81.5MiB/5002msec) 00:31:10.884 slat (nsec): min=5415, max=44354, avg=8562.66, stdev=2515.75 00:31:10.884 clat (usec): min=1339, max=7942, avg=3810.25, stdev=677.78 00:31:10.884 lat (usec): min=1348, max=7976, avg=3818.81, stdev=677.67 00:31:10.884 clat percentiles (usec): 00:31:10.884 | 1.00th=[ 2278], 5.00th=[ 2737], 10.00th=[ 2999], 20.00th=[ 3294], 00:31:10.884 | 30.00th=[ 3523], 40.00th=[ 3687], 50.00th=[ 3785], 60.00th=[ 3949], 00:31:10.884 | 70.00th=[ 4015], 80.00th=[ 4146], 90.00th=[ 4621], 95.00th=[ 5080], 00:31:10.884 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6456], 99.95th=[ 6652], 00:31:10.884 | 99.99th=[ 7832] 00:31:10.884 bw ( KiB/s): min=16288, max=17376, per=26.63%, avg=16691.56, stdev=360.87, samples=9 00:31:10.884 iops : min= 2036, max= 2172, avg=2086.44, stdev=45.11, samples=9 00:31:10.884 lat (msec) : 2=0.29%, 4=66.01%, 10=33.71% 00:31:10.884 cpu : usr=98.06%, sys=1.64%, ctx=7, majf=0, minf=50 00:31:10.884 IO depths : 1=0.4%, 2=2.2%, 4=69.6%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:10.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.884 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.884 issued rwts: total=10437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.884 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:10.884 00:31:10.884 Run status group 0 (all jobs): 00:31:10.884 READ: bw=61.2MiB/s (64.2MB/s), 14.5MiB/s-16.3MiB/s (15.2MB/s-17.1MB/s), io=306MiB (321MB), run=5002-5003msec 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.884 00:31:10.884 real 0m24.077s 00:31:10.884 user 5m14.548s 00:31:10.884 sys 0m4.322s 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:10.884 21:47:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.884 ************************************ 00:31:10.884 END TEST fio_dif_rand_params 00:31:10.884 ************************************ 00:31:10.884 21:47:00 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:10.884 21:47:00 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:10.884 21:47:00 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:10.884 21:47:00 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:10.884 21:47:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:10.884 ************************************ 00:31:10.884 START TEST fio_dif_digest 00:31:10.884 ************************************ 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:10.884 21:47:00 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:10.884 bdev_null0 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:10.884 [2024-07-15 21:47:00.196390] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:10.884 21:47:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:10.885 { 00:31:10.885 "params": { 00:31:10.885 "name": "Nvme$subsystem", 00:31:10.885 "trtype": "$TEST_TRANSPORT", 00:31:10.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.885 "adrfam": "ipv4", 
00:31:10.885 "trsvcid": "$NVMF_PORT", 00:31:10.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.885 "hdgst": ${hdgst:-false}, 00:31:10.885 "ddgst": ${ddgst:-false} 00:31:10.885 }, 00:31:10.885 "method": "bdev_nvme_attach_controller" 00:31:10.885 } 00:31:10.885 EOF 00:31:10.885 )") 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:10.885 "params": { 00:31:10.885 "name": "Nvme0", 00:31:10.885 "trtype": "tcp", 00:31:10.885 "traddr": "10.0.0.2", 00:31:10.885 "adrfam": "ipv4", 00:31:10.885 "trsvcid": "4420", 00:31:10.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.885 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:10.885 "hdgst": true, 00:31:10.885 "ddgst": true 00:31:10.885 }, 00:31:10.885 "method": "bdev_nvme_attach_controller" 00:31:10.885 }' 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:10.885 21:47:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.885 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:10.885 ... 
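For reference, the fio_dif_digest setup traced above reduces to a short RPC and fio sequence. The lines below are a minimal hand-run sketch, not part of the test script: they assume SPDK's scripts/rpc.py talking to the default RPC socket, use hypothetical files bdev.json and digest.fio in place of the /dev/fd/62 JSON and /dev/fd/61 job file that dif.sh generates on the fly, and <spdk> stands in for the SPDK build tree.

    # Null bdev with 16-byte metadata and DIF type 3, exported over NVMe/TCP at 10.0.0.2:4420
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # bdev.json would carry the bdev_nvme_attach_controller entry printed above, with
    # "hdgst": true and "ddgst": true so header and data digests are exercised end to end;
    # digest.fio would mirror the traced job: randread, bs=128k, iodepth=3, numjobs=3, runtime=10.
    LD_PRELOAD=<spdk>/build/fio/spdk_bdev fio --ioengine=spdk_bdev --spdk_json_conf bdev.json digest.fio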
00:31:10.885 fio-3.35 00:31:10.885 Starting 3 threads 00:31:10.885 EAL: No free 2048 kB hugepages reported on node 1 00:31:23.122 00:31:23.122 filename0: (groupid=0, jobs=1): err= 0: pid=2393742: Mon Jul 15 21:47:11 2024 00:31:23.123 read: IOPS=141, BW=17.7MiB/s (18.6MB/s)(178MiB/10032msec) 00:31:23.123 slat (nsec): min=5647, max=50231, avg=7788.56, stdev=2548.52 00:31:23.123 clat (usec): min=8432, max=98935, avg=21181.26, stdev=16171.89 00:31:23.123 lat (usec): min=8438, max=98942, avg=21189.05, stdev=16171.92 00:31:23.123 clat percentiles (usec): 00:31:23.123 | 1.00th=[10290], 5.00th=[11207], 10.00th=[11863], 20.00th=[12911], 00:31:23.123 | 30.00th=[13829], 40.00th=[14484], 50.00th=[15008], 60.00th=[15664], 00:31:23.123 | 70.00th=[16319], 80.00th=[17433], 90.00th=[54789], 95.00th=[56361], 00:31:23.123 | 99.00th=[58983], 99.50th=[95945], 99.90th=[98042], 99.95th=[99091], 00:31:23.123 | 99.99th=[99091] 00:31:23.123 bw ( KiB/s): min=13312, max=23808, per=24.73%, avg=18137.60, stdev=2799.75, samples=20 00:31:23.123 iops : min= 104, max= 186, avg=141.70, stdev=21.87, samples=20 00:31:23.123 lat (msec) : 10=0.92%, 20=83.38%, 50=0.07%, 100=15.63% 00:31:23.123 cpu : usr=96.64%, sys=3.12%, ctx=22, majf=0, minf=101 00:31:23.123 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:23.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.123 issued rwts: total=1420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.123 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:23.123 filename0: (groupid=0, jobs=1): err= 0: pid=2393743: Mon Jul 15 21:47:11 2024 00:31:23.123 read: IOPS=296, BW=37.0MiB/s (38.8MB/s)(372MiB/10046msec) 00:31:23.123 slat (nsec): min=5653, max=54193, avg=6921.82, stdev=1516.08 00:31:23.123 clat (usec): min=5056, max=50895, avg=10099.84, stdev=2601.11 00:31:23.123 lat (usec): min=5065, max=50904, avg=10106.77, stdev=2601.28 00:31:23.123 clat percentiles (usec): 00:31:23.123 | 1.00th=[ 5997], 5.00th=[ 6783], 10.00th=[ 7242], 20.00th=[ 8094], 00:31:23.123 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10683], 00:31:23.123 | 70.00th=[11469], 80.00th=[12125], 90.00th=[12780], 95.00th=[13304], 00:31:23.123 | 99.00th=[14222], 99.50th=[14746], 99.90th=[49021], 99.95th=[51119], 00:31:23.123 | 99.99th=[51119] 00:31:23.123 bw ( KiB/s): min=34560, max=40448, per=51.93%, avg=38080.00, stdev=1654.65, samples=20 00:31:23.123 iops : min= 270, max= 316, avg=297.50, stdev=12.93, samples=20 00:31:23.123 lat (msec) : 10=51.39%, 20=48.44%, 50=0.10%, 100=0.07% 00:31:23.123 cpu : usr=96.01%, sys=3.71%, ctx=22, majf=0, minf=221 00:31:23.123 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:23.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.123 issued rwts: total=2977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.123 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:23.123 filename0: (groupid=0, jobs=1): err= 0: pid=2393744: Mon Jul 15 21:47:11 2024 00:31:23.123 read: IOPS=135, BW=16.9MiB/s (17.8MB/s)(170MiB/10025msec) 00:31:23.123 slat (nsec): min=5673, max=39371, avg=7269.46, stdev=1747.37 00:31:23.123 clat (usec): min=9439, max=99149, avg=22134.40, stdev=17312.12 00:31:23.123 lat (usec): min=9446, max=99160, avg=22141.66, stdev=17312.14 00:31:23.123 clat percentiles (usec): 
00:31:23.123 | 1.00th=[10552], 5.00th=[11469], 10.00th=[12256], 20.00th=[13435], 00:31:23.123 | 30.00th=[14091], 40.00th=[14746], 50.00th=[15401], 60.00th=[15926], 00:31:23.123 | 70.00th=[16581], 80.00th=[17695], 90.00th=[55313], 95.00th=[56886], 00:31:23.123 | 99.00th=[95945], 99.50th=[96994], 99.90th=[98042], 99.95th=[99091], 00:31:23.123 | 99.99th=[99091] 00:31:23.123 bw ( KiB/s): min=13312, max=22016, per=23.65%, avg=17344.00, stdev=2343.15, samples=20 00:31:23.123 iops : min= 104, max= 172, avg=135.50, stdev=18.31, samples=20 00:31:23.123 lat (msec) : 10=0.44%, 20=82.77%, 50=0.07%, 100=16.72% 00:31:23.123 cpu : usr=96.51%, sys=3.25%, ctx=17, majf=0, minf=204 00:31:23.123 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:23.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.123 issued rwts: total=1358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.123 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:23.123 00:31:23.123 Run status group 0 (all jobs): 00:31:23.123 READ: bw=71.6MiB/s (75.1MB/s), 16.9MiB/s-37.0MiB/s (17.8MB/s-38.8MB/s), io=719MiB (754MB), run=10025-10046msec 00:31:23.123 21:47:11 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:23.123 21:47:11 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:23.123 21:47:11 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:23.123 21:47:11 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:23.123 21:47:11 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:23.123 21:47:11 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:23.123 21:47:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.123 21:47:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:23.123 21:47:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.123 21:47:11 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:23.123 21:47:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.123 21:47:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:23.123 21:47:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.123 00:31:23.123 real 0m11.200s 00:31:23.123 user 0m44.761s 00:31:23.123 sys 0m1.284s 00:31:23.123 21:47:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:23.123 21:47:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:23.123 ************************************ 00:31:23.123 END TEST fio_dif_digest 00:31:23.123 ************************************ 00:31:23.123 21:47:11 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:23.123 21:47:11 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:23.123 21:47:11 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:23.123 21:47:11 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:23.123 21:47:11 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:23.123 21:47:11 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:23.123 21:47:11 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:23.123 21:47:11 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:23.123 21:47:11 nvmf_dif -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:23.123 rmmod nvme_tcp 00:31:23.123 rmmod nvme_fabrics 00:31:23.123 rmmod nvme_keyring 00:31:23.123 21:47:11 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:23.123 21:47:11 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:23.123 21:47:11 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:23.123 21:47:11 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2383271 ']' 00:31:23.123 21:47:11 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2383271 00:31:23.123 21:47:11 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2383271 ']' 00:31:23.123 21:47:11 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2383271 00:31:23.123 21:47:11 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:23.123 21:47:11 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:23.123 21:47:11 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2383271 00:31:23.123 21:47:11 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:23.123 21:47:11 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:23.123 21:47:11 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2383271' 00:31:23.123 killing process with pid 2383271 00:31:23.123 21:47:11 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2383271 00:31:23.123 21:47:11 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2383271 00:31:23.123 21:47:11 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:23.123 21:47:11 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:25.037 Waiting for block devices as requested 00:31:25.037 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:25.037 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:25.298 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:25.298 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:25.298 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:25.558 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:25.558 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:25.558 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:25.558 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:25.819 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:25.819 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:26.080 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:26.080 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:26.080 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:26.080 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:26.342 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:26.342 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:26.602 21:47:16 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:26.602 21:47:16 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:26.602 21:47:16 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:26.602 21:47:16 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:26.602 21:47:16 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.602 21:47:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:26.602 21:47:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.516 21:47:18 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:28.516 00:31:28.516 real 1m17.081s 00:31:28.516 user 8m3.162s 00:31:28.516 sys 0m19.733s 00:31:28.516 21:47:18 nvmf_dif -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:31:28.516 21:47:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:28.516 ************************************ 00:31:28.516 END TEST nvmf_dif 00:31:28.516 ************************************ 00:31:28.777 21:47:18 -- common/autotest_common.sh@1142 -- # return 0 00:31:28.777 21:47:18 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:28.777 21:47:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:28.777 21:47:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:28.777 21:47:18 -- common/autotest_common.sh@10 -- # set +x 00:31:28.777 ************************************ 00:31:28.777 START TEST nvmf_abort_qd_sizes 00:31:28.777 ************************************ 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:28.777 * Looking for test storage... 00:31:28.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.777 21:47:18 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:28.777 21:47:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:35.371 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:35.371 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:35.372 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:35.372 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:35.372 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
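The NIC discovery above amounts to matching the two E810 functions (device ID 0x159b, ice driver) and reading the kernel interface name bound to each one from sysfs. A minimal stand-alone sketch of the same lookup, with an illustrative lspci filter that is not part of the test script:

    # Map each E810 (8086:159b) PCI function to the net interface exposed under its sysfs node
    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] && echo "$pci -> $(basename "$netdir")"
        done
    done

On this host it would print the two mappings the log reports: 0000:4b:00.0 -> cvl_0_0 and 0000:4b:00.1 -> cvl_0_1.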
00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.372 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.634 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.634 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.634 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:35.634 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.634 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.634 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.634 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:35.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:35.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:31:35.634 00:31:35.634 --- 10.0.0.2 ping statistics --- 00:31:35.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.634 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:31:35.634 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:35.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:31:35.634 00:31:35.634 --- 10.0.0.1 ping statistics --- 00:31:35.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.634 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:31:35.634 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.634 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:35.634 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:35.634 21:47:25 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:38.182 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:38.444 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2402840 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2402840 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2402840 ']' 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:39.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:39.017 21:47:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:39.017 [2024-07-15 21:47:28.634108] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:31:39.017 [2024-07-15 21:47:28.634180] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:39.017 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.017 [2024-07-15 21:47:28.698683] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:39.017 [2024-07-15 21:47:28.764985] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:39.017 [2024-07-15 21:47:28.765020] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:39.017 [2024-07-15 21:47:28.765028] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:39.017 [2024-07-15 21:47:28.765035] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:39.017 [2024-07-15 21:47:28.765040] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:39.017 [2024-07-15 21:47:28.765189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.017 [2024-07-15 21:47:28.765208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:39.017 [2024-07-15 21:47:28.765758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:39.017 [2024-07-15 21:47:28.765759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:31:39.960 21:47:29 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:39.960 21:47:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:39.960 ************************************ 00:31:39.960 START TEST spdk_target_abort 00:31:39.960 ************************************ 00:31:39.960 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:31:39.960 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:39.960 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:39.960 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.960 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:40.221 spdk_targetn1 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:40.221 [2024-07-15 21:47:29.789124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:40.221 [2024-07-15 21:47:29.829360] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:40.221 21:47:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:40.221 EAL: No free 2048 kB hugepages 
reported on node 1 00:31:40.221 [2024-07-15 21:47:30.010412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:440 len:8 PRP1 0x2000078be000 PRP2 0x0 00:31:40.221 [2024-07-15 21:47:30.010444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0038 p:1 m:0 dnr:0 00:31:40.221 [2024-07-15 21:47:30.011155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:472 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:31:40.221 [2024-07-15 21:47:30.011173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:003c p:1 m:0 dnr:0 00:31:40.221 [2024-07-15 21:47:30.011473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:480 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:31:40.221 [2024-07-15 21:47:30.011487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:003e p:1 m:0 dnr:0 00:31:40.221 [2024-07-15 21:47:30.016531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:544 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:31:40.221 [2024-07-15 21:47:30.016553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0045 p:1 m:0 dnr:0 00:31:40.221 [2024-07-15 21:47:30.026066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:816 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:31:40.221 [2024-07-15 21:47:30.026090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0067 p:1 m:0 dnr:0 00:31:43.515 Initializing NVMe Controllers 00:31:43.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:43.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:43.515 Initialization complete. Launching workers. 
00:31:43.515 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9736, failed: 5 00:31:43.515 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3228, failed to submit 6513 00:31:43.515 success 719, unsuccess 2509, failed 0 00:31:43.515 21:47:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:43.515 21:47:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:43.515 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.516 [2024-07-15 21:47:33.315114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:2408 len:8 PRP1 0x200007c44000 PRP2 0x0 00:31:43.516 [2024-07-15 21:47:33.315164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:45.470 [2024-07-15 21:47:34.817173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:37000 len:8 PRP1 0x200007c44000 PRP2 0x0 00:31:45.470 [2024-07-15 21:47:34.817210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:31:46.852 Initializing NVMe Controllers 00:31:46.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:46.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:46.853 Initialization complete. Launching workers. 00:31:46.853 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8618, failed: 2 00:31:46.853 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1221, failed to submit 7399 00:31:46.853 success 366, unsuccess 855, failed 0 00:31:46.853 21:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:46.853 21:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:46.853 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.152 Initializing NVMe Controllers 00:31:50.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:50.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:50.152 Initialization complete. Launching workers. 
00:31:50.152 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41603, failed: 0 00:31:50.152 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2718, failed to submit 38885 00:31:50.152 success 615, unsuccess 2103, failed 0 00:31:50.152 21:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:50.152 21:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.152 21:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:50.152 21:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.152 21:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:50.152 21:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.152 21:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2402840 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2402840 ']' 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2402840 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2402840 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2402840' 00:31:52.062 killing process with pid 2402840 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2402840 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2402840 00:31:52.062 00:31:52.062 real 0m12.186s 00:31:52.062 user 0m49.259s 00:31:52.062 sys 0m2.045s 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:52.062 ************************************ 00:31:52.062 END TEST spdk_target_abort 00:31:52.062 ************************************ 00:31:52.062 21:47:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:52.062 21:47:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:52.062 21:47:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:52.062 21:47:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:52.062 21:47:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:52.062 
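Note on the run above: the queue-depth sweep for spdk_target_abort is driven by the rabort() helper in target/abort_qd_sizes.sh, whose individual commands appear in the trace. A condensed sketch of what that trace shows the helper doing is below; it is not the verbatim script source, and the transport-ID concatenation, the $rootdir variable, and the flag meanings in the comments are inferred from the traced commands rather than quoted from the repository.

rabort() {
        local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
        local qds=(4 24 64) qd target= r

        # Build the transport ID string passed to the abort example via -r, e.g.
        # 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
        for r in trtype adrfam traddr trsvcid subnqn; do
                target=${target:+$target }$r:${!r}
        done

        # Run the SPDK abort example once per queue depth under test.
        # -q = queue depth being swept, -w rw -M 50 = mixed read/write workload (50% reads),
        # -o 4096 = 4 KiB I/O size, -r = target transport ID string built above.
        # $rootdir is assumed here to be the SPDK checkout root; the trace uses the
        # full Jenkins workspace path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk.
        for qd in "${qds[@]}"; do
                "$rootdir/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
        done
}

As in the trace, it is invoked as: rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn (and again later with traddr 10.0.0.1 for the kernel_target_abort variant that starts below).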
************************************ 00:31:52.062 START TEST kernel_target_abort 00:31:52.062 ************************************ 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:52.062 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:52.063 21:47:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:55.384 Waiting for block devices as requested 00:31:55.384 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:55.384 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:55.384 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:55.645 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:55.645 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:55.645 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:55.905 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:55.905 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:55.905 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:56.166 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:56.166 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:56.426 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:56.426 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:56.426 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:56.426 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:56.686 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:56.686 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:56.947 No valid GPT data, bailing 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:56.947 21:47:46 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:56.947 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:31:57.207 00:31:57.207 Discovery Log Number of Records 2, Generation counter 2 00:31:57.207 =====Discovery Log Entry 0====== 00:31:57.207 trtype: tcp 00:31:57.207 adrfam: ipv4 00:31:57.207 subtype: current discovery subsystem 00:31:57.207 treq: not specified, sq flow control disable supported 00:31:57.207 portid: 1 00:31:57.207 trsvcid: 4420 00:31:57.207 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:57.207 traddr: 10.0.0.1 00:31:57.207 eflags: none 00:31:57.207 sectype: none 00:31:57.207 =====Discovery Log Entry 1====== 00:31:57.207 trtype: tcp 00:31:57.207 adrfam: ipv4 00:31:57.207 subtype: nvme subsystem 00:31:57.207 treq: not specified, sq flow control disable supported 00:31:57.207 portid: 1 00:31:57.207 trsvcid: 4420 00:31:57.207 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:57.207 traddr: 10.0.0.1 00:31:57.207 eflags: none 00:31:57.207 sectype: none 00:31:57.207 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:57.207 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:57.207 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:57.207 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:57.207 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:57.207 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:57.207 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:57.207 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:57.207 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:57.207 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:57.207 21:47:46 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:57.207 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:57.207 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:57.208 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:57.208 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:57.208 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:57.208 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:57.208 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:57.208 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:57.208 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:57.208 21:47:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:57.208 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.506 Initializing NVMe Controllers 00:32:00.506 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:00.506 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:00.506 Initialization complete. Launching workers. 00:32:00.506 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 44982, failed: 0 00:32:00.506 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 44982, failed to submit 0 00:32:00.506 success 0, unsuccess 44982, failed 0 00:32:00.506 21:47:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:00.506 21:47:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:00.506 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.822 Initializing NVMe Controllers 00:32:03.822 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:03.822 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:03.822 Initialization complete. Launching workers. 
00:32:03.822 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85493, failed: 0 00:32:03.822 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21526, failed to submit 63967 00:32:03.822 success 0, unsuccess 21526, failed 0 00:32:03.822 21:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:03.822 21:47:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:03.822 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.367 Initializing NVMe Controllers 00:32:06.367 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:06.367 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:06.367 Initialization complete. Launching workers. 00:32:06.367 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82834, failed: 0 00:32:06.367 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20686, failed to submit 62148 00:32:06.367 success 0, unsuccess 20686, failed 0 00:32:06.367 21:47:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:06.367 21:47:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:06.367 21:47:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:06.367 21:47:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:06.367 21:47:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:06.367 21:47:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:06.367 21:47:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:06.367 21:47:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:06.367 21:47:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:06.367 21:47:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:09.726 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:09.726 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:09.726 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:09.726 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:09.726 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:09.726 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:09.726 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:09.726 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:09.726 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:09.726 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:09.726 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:09.726 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:09.726 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:09.726 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:32:09.726 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:09.726 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:11.639 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:11.899 00:32:11.899 real 0m19.778s 00:32:11.899 user 0m7.644s 00:32:11.899 sys 0m6.327s 00:32:11.899 21:48:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:11.899 21:48:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:11.899 ************************************ 00:32:11.899 END TEST kernel_target_abort 00:32:11.899 ************************************ 00:32:11.899 21:48:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:11.899 21:48:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:11.899 21:48:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:11.899 21:48:01 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:11.899 21:48:01 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:11.899 21:48:01 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:11.899 21:48:01 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:11.900 21:48:01 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:11.900 21:48:01 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:11.900 rmmod nvme_tcp 00:32:11.900 rmmod nvme_fabrics 00:32:11.900 rmmod nvme_keyring 00:32:11.900 21:48:01 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:11.900 21:48:01 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:11.900 21:48:01 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:11.900 21:48:01 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2402840 ']' 00:32:11.900 21:48:01 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2402840 00:32:11.900 21:48:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2402840 ']' 00:32:11.900 21:48:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2402840 00:32:11.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2402840) - No such process 00:32:11.900 21:48:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2402840 is not found' 00:32:11.900 Process with pid 2402840 is not found 00:32:11.900 21:48:01 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:11.900 21:48:01 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:15.198 Waiting for block devices as requested 00:32:15.198 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:15.458 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:15.458 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:15.458 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:15.719 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:15.719 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:15.719 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:15.980 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:15.980 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:16.241 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:16.241 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:16.241 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:16.241 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:16.502 0000:00:01.2 (8086 0b00): vfio-pci -> 
ioatdma 00:32:16.502 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:16.502 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:16.502 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:16.763 21:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:16.763 21:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:16.763 21:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:16.763 21:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:16.763 21:48:06 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.763 21:48:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:16.763 21:48:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.309 21:48:08 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:19.309 00:32:19.309 real 0m50.224s 00:32:19.309 user 1m1.899s 00:32:19.309 sys 0m18.140s 00:32:19.309 21:48:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:19.309 21:48:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:19.309 ************************************ 00:32:19.309 END TEST nvmf_abort_qd_sizes 00:32:19.309 ************************************ 00:32:19.309 21:48:08 -- common/autotest_common.sh@1142 -- # return 0 00:32:19.309 21:48:08 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:19.309 21:48:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:19.309 21:48:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:19.309 21:48:08 -- common/autotest_common.sh@10 -- # set +x 00:32:19.309 ************************************ 00:32:19.309 START TEST keyring_file 00:32:19.309 ************************************ 00:32:19.309 21:48:08 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:19.309 * Looking for test storage... 
00:32:19.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:19.309 21:48:08 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:19.309 21:48:08 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:19.309 21:48:08 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:19.309 21:48:08 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:19.309 21:48:08 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.309 21:48:08 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.309 21:48:08 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.309 21:48:08 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:19.309 21:48:08 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:19.309 21:48:08 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:19.309 21:48:08 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:19.309 21:48:08 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:19.309 21:48:08 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:19.309 21:48:08 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:19.309 21:48:08 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ishjsVIzhJ 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:19.309 21:48:08 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ishjsVIzhJ 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ishjsVIzhJ 00:32:19.309 21:48:08 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ishjsVIzhJ 00:32:19.309 21:48:08 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LLyuZ5TrNY 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:19.309 21:48:08 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LLyuZ5TrNY 00:32:19.309 21:48:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LLyuZ5TrNY 00:32:19.309 21:48:08 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.LLyuZ5TrNY 00:32:19.309 21:48:08 keyring_file -- keyring/file.sh@30 -- # tgtpid=2413585 00:32:19.309 21:48:08 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2413585 00:32:19.309 21:48:08 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:19.309 21:48:08 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2413585 ']' 00:32:19.309 21:48:08 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.310 21:48:08 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:19.310 21:48:08 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:19.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:19.310 21:48:08 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:19.310 21:48:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:19.310 [2024-07-15 21:48:09.011033] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:32:19.310 [2024-07-15 21:48:09.011106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413585 ] 00:32:19.310 EAL: No free 2048 kB hugepages reported on node 1 00:32:19.310 [2024-07-15 21:48:09.074407] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.570 [2024-07-15 21:48:09.149220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.140 21:48:09 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:20.140 21:48:09 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:20.140 21:48:09 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:20.140 21:48:09 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.140 21:48:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:20.140 [2024-07-15 21:48:09.776980] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:20.140 null0 00:32:20.140 [2024-07-15 21:48:09.809031] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:20.140 [2024-07-15 21:48:09.809292] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:20.141 [2024-07-15 21:48:09.817045] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.141 21:48:09 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:20.141 [2024-07-15 21:48:09.833077] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:20.141 request: 00:32:20.141 { 00:32:20.141 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:20.141 "secure_channel": false, 00:32:20.141 "listen_address": { 00:32:20.141 "trtype": "tcp", 00:32:20.141 "traddr": "127.0.0.1", 00:32:20.141 "trsvcid": "4420" 00:32:20.141 }, 00:32:20.141 "method": "nvmf_subsystem_add_listener", 00:32:20.141 "req_id": 1 00:32:20.141 } 00:32:20.141 Got JSON-RPC error response 00:32:20.141 response: 00:32:20.141 { 00:32:20.141 "code": -32602, 00:32:20.141 "message": "Invalid parameters" 00:32:20.141 } 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:20.141 21:48:09 keyring_file -- keyring/file.sh@46 -- # bperfpid=2413789 00:32:20.141 21:48:09 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2413789 /var/tmp/bperf.sock 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2413789 ']' 00:32:20.141 21:48:09 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:20.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:20.141 21:48:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:20.141 [2024-07-15 21:48:09.887741] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 00:32:20.141 [2024-07-15 21:48:09.887786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413789 ] 00:32:20.141 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.402 [2024-07-15 21:48:09.962100] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.402 [2024-07-15 21:48:10.029703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.973 21:48:10 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:20.973 21:48:10 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:20.973 21:48:10 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ishjsVIzhJ 00:32:20.973 21:48:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ishjsVIzhJ 00:32:21.233 21:48:10 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LLyuZ5TrNY 00:32:21.233 21:48:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LLyuZ5TrNY 00:32:21.233 21:48:10 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:21.233 21:48:10 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:21.233 21:48:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:21.233 21:48:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.233 21:48:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.493 21:48:11 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ishjsVIzhJ == \/\t\m\p\/\t\m\p\.\i\s\h\j\s\V\I\z\h\J ]] 00:32:21.493 21:48:11 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:32:21.493 21:48:11 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:21.493 21:48:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:21.493 21:48:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.493 21:48:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.493 21:48:11 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.LLyuZ5TrNY == \/\t\m\p\/\t\m\p\.\L\L\y\u\Z\5\T\r\N\Y ]] 00:32:21.493 21:48:11 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:21.493 21:48:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:21.493 21:48:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:21.493 21:48:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.493 21:48:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.493 21:48:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:21.753 21:48:11 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:21.753 21:48:11 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:21.753 21:48:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:21.753 21:48:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:21.753 21:48:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:21.753 21:48:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.753 21:48:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.753 21:48:11 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:21.753 21:48:11 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:21.753 21:48:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:22.013 [2024-07-15 21:48:11.696540] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:22.013 nvme0n1 00:32:22.013 21:48:11 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:22.013 21:48:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:22.013 21:48:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.013 21:48:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.013 21:48:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:22.013 21:48:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.273 21:48:11 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:22.273 21:48:11 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:22.273 21:48:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:22.273 21:48:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.273 21:48:11 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.273 21:48:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:22.273 21:48:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.533 21:48:12 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:22.533 21:48:12 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:22.533 Running I/O for 1 seconds... 00:32:23.471 00:32:23.472 Latency(us) 00:32:23.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.472 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:23.472 nvme0n1 : 1.02 6876.66 26.86 0.00 0.00 18444.05 7263.57 22391.47 00:32:23.472 =================================================================================================================== 00:32:23.472 Total : 6876.66 26.86 0.00 0.00 18444.05 7263.57 22391.47 00:32:23.472 0 00:32:23.472 21:48:13 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:23.472 21:48:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:23.731 21:48:13 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:23.731 21:48:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:23.731 21:48:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:23.731 21:48:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:23.731 21:48:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.731 21:48:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:23.992 21:48:13 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:23.992 21:48:13 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:23.992 21:48:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:23.992 21:48:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:23.992 21:48:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:23.992 21:48:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.992 21:48:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:23.992 21:48:13 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:23.992 21:48:13 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:23.992 21:48:13 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:23.992 21:48:13 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:23.992 21:48:13 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:23.992 21:48:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.992 21:48:13 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:23.992 21:48:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.992 21:48:13 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:23.992 21:48:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:24.254 [2024-07-15 21:48:13.869851] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:24.254 [2024-07-15 21:48:13.870092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bc890 (107): Transport endpoint is not connected 00:32:24.254 [2024-07-15 21:48:13.871088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bc890 (9): Bad file descriptor 00:32:24.254 [2024-07-15 21:48:13.872089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:24.254 [2024-07-15 21:48:13.872097] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:24.254 [2024-07-15 21:48:13.872102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:24.254 request: 00:32:24.254 { 00:32:24.254 "name": "nvme0", 00:32:24.254 "trtype": "tcp", 00:32:24.254 "traddr": "127.0.0.1", 00:32:24.254 "adrfam": "ipv4", 00:32:24.254 "trsvcid": "4420", 00:32:24.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:24.254 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:24.254 "prchk_reftag": false, 00:32:24.254 "prchk_guard": false, 00:32:24.254 "hdgst": false, 00:32:24.254 "ddgst": false, 00:32:24.254 "psk": "key1", 00:32:24.254 "method": "bdev_nvme_attach_controller", 00:32:24.254 "req_id": 1 00:32:24.254 } 00:32:24.254 Got JSON-RPC error response 00:32:24.254 response: 00:32:24.254 { 00:32:24.254 "code": -5, 00:32:24.254 "message": "Input/output error" 00:32:24.254 } 00:32:24.254 21:48:13 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:24.254 21:48:13 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:24.254 21:48:13 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:24.254 21:48:13 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:24.254 21:48:13 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:24.254 21:48:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:24.254 21:48:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:24.254 21:48:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.254 21:48:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:24.254 21:48:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.536 21:48:14 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:24.536 21:48:14 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:24.536 21:48:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:24.536 21:48:14 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:24.536 21:48:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.536 21:48:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.536 21:48:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:24.536 21:48:14 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:24.536 21:48:14 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:24.536 21:48:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:24.796 21:48:14 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:24.796 21:48:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:24.796 21:48:14 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:24.796 21:48:14 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:24.796 21:48:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:25.122 21:48:14 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:25.122 21:48:14 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ishjsVIzhJ 00:32:25.122 21:48:14 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ishjsVIzhJ 00:32:25.122 21:48:14 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:25.122 21:48:14 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ishjsVIzhJ 00:32:25.122 21:48:14 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:25.122 21:48:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:25.122 21:48:14 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:25.122 21:48:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:25.122 21:48:14 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ishjsVIzhJ 00:32:25.122 21:48:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ishjsVIzhJ 00:32:25.122 [2024-07-15 21:48:14.871942] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ishjsVIzhJ': 0100660 00:32:25.122 [2024-07-15 21:48:14.871961] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:25.122 request: 00:32:25.122 { 00:32:25.122 "name": "key0", 00:32:25.122 "path": "/tmp/tmp.ishjsVIzhJ", 00:32:25.122 "method": "keyring_file_add_key", 00:32:25.122 "req_id": 1 00:32:25.122 } 00:32:25.122 Got JSON-RPC error response 00:32:25.122 response: 00:32:25.122 { 00:32:25.122 "code": -1, 00:32:25.122 "message": "Operation not permitted" 00:32:25.122 } 00:32:25.122 21:48:14 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:25.122 21:48:14 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:25.122 21:48:14 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:25.122 21:48:14 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:25.122 21:48:14 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ishjsVIzhJ 00:32:25.122 21:48:14 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ishjsVIzhJ 00:32:25.122 21:48:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ishjsVIzhJ 00:32:25.382 21:48:15 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ishjsVIzhJ 00:32:25.382 21:48:15 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:25.382 21:48:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:25.382 21:48:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:25.382 21:48:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:25.382 21:48:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:25.382 21:48:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:25.642 21:48:15 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:25.642 21:48:15 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:25.642 21:48:15 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:25.642 21:48:15 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:25.642 21:48:15 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:25.642 21:48:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:25.642 21:48:15 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:25.642 21:48:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:25.642 21:48:15 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:25.642 21:48:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:25.642 [2024-07-15 21:48:15.349155] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ishjsVIzhJ': No such file or directory 00:32:25.642 [2024-07-15 21:48:15.349168] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:25.642 [2024-07-15 21:48:15.349184] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:25.642 [2024-07-15 21:48:15.349188] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:25.642 [2024-07-15 21:48:15.349193] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:25.642 request: 00:32:25.642 { 00:32:25.642 "name": "nvme0", 00:32:25.642 "trtype": "tcp", 00:32:25.642 "traddr": "127.0.0.1", 00:32:25.642 "adrfam": "ipv4", 00:32:25.642 
"trsvcid": "4420", 00:32:25.642 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:25.643 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:25.643 "prchk_reftag": false, 00:32:25.643 "prchk_guard": false, 00:32:25.643 "hdgst": false, 00:32:25.643 "ddgst": false, 00:32:25.643 "psk": "key0", 00:32:25.643 "method": "bdev_nvme_attach_controller", 00:32:25.643 "req_id": 1 00:32:25.643 } 00:32:25.643 Got JSON-RPC error response 00:32:25.643 response: 00:32:25.643 { 00:32:25.643 "code": -19, 00:32:25.643 "message": "No such device" 00:32:25.643 } 00:32:25.643 21:48:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:25.643 21:48:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:25.643 21:48:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:25.643 21:48:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:25.643 21:48:15 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:25.643 21:48:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:25.904 21:48:15 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:25.904 21:48:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:25.904 21:48:15 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:25.904 21:48:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:25.904 21:48:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:25.904 21:48:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:25.904 21:48:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0FOZjE9aaV 00:32:25.904 21:48:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:25.904 21:48:15 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:25.904 21:48:15 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:25.904 21:48:15 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:25.904 21:48:15 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:25.904 21:48:15 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:25.904 21:48:15 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:25.904 21:48:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0FOZjE9aaV 00:32:25.904 21:48:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0FOZjE9aaV 00:32:25.904 21:48:15 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.0FOZjE9aaV 00:32:25.904 21:48:15 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0FOZjE9aaV 00:32:25.904 21:48:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0FOZjE9aaV 00:32:26.164 21:48:15 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:26.165 21:48:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:26.165 nvme0n1 00:32:26.165 
21:48:15 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:26.165 21:48:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:26.165 21:48:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:26.165 21:48:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:26.165 21:48:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:26.165 21:48:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.423 21:48:16 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:26.423 21:48:16 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:26.423 21:48:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:26.683 21:48:16 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:26.683 21:48:16 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:26.683 21:48:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:26.683 21:48:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.683 21:48:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:26.683 21:48:16 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:26.683 21:48:16 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:26.683 21:48:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:26.683 21:48:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:26.683 21:48:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:26.683 21:48:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:26.683 21:48:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.942 21:48:16 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:26.942 21:48:16 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:26.942 21:48:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:26.942 21:48:16 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:26.942 21:48:16 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:26.942 21:48:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.202 21:48:16 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:27.202 21:48:16 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0FOZjE9aaV 00:32:27.203 21:48:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0FOZjE9aaV 00:32:27.462 21:48:17 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LLyuZ5TrNY 00:32:27.462 21:48:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LLyuZ5TrNY 00:32:27.462 21:48:17 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:27.462 21:48:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:27.723 nvme0n1 00:32:27.723 21:48:17 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:27.723 21:48:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:27.983 21:48:17 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:27.983 "subsystems": [ 00:32:27.983 { 00:32:27.983 "subsystem": "keyring", 00:32:27.983 "config": [ 00:32:27.983 { 00:32:27.983 "method": "keyring_file_add_key", 00:32:27.983 "params": { 00:32:27.983 "name": "key0", 00:32:27.983 "path": "/tmp/tmp.0FOZjE9aaV" 00:32:27.983 } 00:32:27.983 }, 00:32:27.983 { 00:32:27.983 "method": "keyring_file_add_key", 00:32:27.983 "params": { 00:32:27.983 "name": "key1", 00:32:27.983 "path": "/tmp/tmp.LLyuZ5TrNY" 00:32:27.983 } 00:32:27.983 } 00:32:27.983 ] 00:32:27.983 }, 00:32:27.983 { 00:32:27.983 "subsystem": "iobuf", 00:32:27.983 "config": [ 00:32:27.983 { 00:32:27.983 "method": "iobuf_set_options", 00:32:27.983 "params": { 00:32:27.983 "small_pool_count": 8192, 00:32:27.983 "large_pool_count": 1024, 00:32:27.983 "small_bufsize": 8192, 00:32:27.983 "large_bufsize": 135168 00:32:27.983 } 00:32:27.983 } 00:32:27.983 ] 00:32:27.983 }, 00:32:27.983 { 00:32:27.983 "subsystem": "sock", 00:32:27.983 "config": [ 00:32:27.983 { 00:32:27.983 "method": "sock_set_default_impl", 00:32:27.983 "params": { 00:32:27.983 "impl_name": "posix" 00:32:27.983 } 00:32:27.983 }, 00:32:27.983 { 00:32:27.983 "method": "sock_impl_set_options", 00:32:27.983 "params": { 00:32:27.983 "impl_name": "ssl", 00:32:27.983 "recv_buf_size": 4096, 00:32:27.983 "send_buf_size": 4096, 00:32:27.983 "enable_recv_pipe": true, 00:32:27.983 "enable_quickack": false, 00:32:27.983 "enable_placement_id": 0, 00:32:27.983 "enable_zerocopy_send_server": true, 00:32:27.983 "enable_zerocopy_send_client": false, 00:32:27.983 "zerocopy_threshold": 0, 00:32:27.983 "tls_version": 0, 00:32:27.983 "enable_ktls": false 00:32:27.983 } 00:32:27.983 }, 00:32:27.983 { 00:32:27.983 "method": "sock_impl_set_options", 00:32:27.983 "params": { 00:32:27.983 "impl_name": "posix", 00:32:27.983 "recv_buf_size": 2097152, 00:32:27.983 "send_buf_size": 2097152, 00:32:27.983 "enable_recv_pipe": true, 00:32:27.983 "enable_quickack": false, 00:32:27.983 "enable_placement_id": 0, 00:32:27.983 "enable_zerocopy_send_server": true, 00:32:27.983 "enable_zerocopy_send_client": false, 00:32:27.983 "zerocopy_threshold": 0, 00:32:27.983 "tls_version": 0, 00:32:27.983 "enable_ktls": false 00:32:27.983 } 00:32:27.983 } 00:32:27.983 ] 00:32:27.983 }, 00:32:27.983 { 00:32:27.983 "subsystem": "vmd", 00:32:27.983 "config": [] 00:32:27.983 }, 00:32:27.983 { 00:32:27.983 "subsystem": "accel", 00:32:27.983 "config": [ 00:32:27.983 { 00:32:27.983 "method": "accel_set_options", 00:32:27.983 "params": { 00:32:27.983 "small_cache_size": 128, 00:32:27.983 "large_cache_size": 16, 00:32:27.983 "task_count": 2048, 00:32:27.983 "sequence_count": 2048, 00:32:27.983 "buf_count": 2048 00:32:27.983 } 00:32:27.983 } 00:32:27.983 ] 00:32:27.983 
}, 00:32:27.983 { 00:32:27.983 "subsystem": "bdev", 00:32:27.983 "config": [ 00:32:27.983 { 00:32:27.983 "method": "bdev_set_options", 00:32:27.983 "params": { 00:32:27.983 "bdev_io_pool_size": 65535, 00:32:27.983 "bdev_io_cache_size": 256, 00:32:27.983 "bdev_auto_examine": true, 00:32:27.983 "iobuf_small_cache_size": 128, 00:32:27.983 "iobuf_large_cache_size": 16 00:32:27.983 } 00:32:27.983 }, 00:32:27.983 { 00:32:27.983 "method": "bdev_raid_set_options", 00:32:27.983 "params": { 00:32:27.983 "process_window_size_kb": 1024 00:32:27.983 } 00:32:27.983 }, 00:32:27.983 { 00:32:27.983 "method": "bdev_iscsi_set_options", 00:32:27.983 "params": { 00:32:27.983 "timeout_sec": 30 00:32:27.983 } 00:32:27.983 }, 00:32:27.983 { 00:32:27.983 "method": "bdev_nvme_set_options", 00:32:27.983 "params": { 00:32:27.983 "action_on_timeout": "none", 00:32:27.983 "timeout_us": 0, 00:32:27.983 "timeout_admin_us": 0, 00:32:27.983 "keep_alive_timeout_ms": 10000, 00:32:27.983 "arbitration_burst": 0, 00:32:27.983 "low_priority_weight": 0, 00:32:27.983 "medium_priority_weight": 0, 00:32:27.983 "high_priority_weight": 0, 00:32:27.983 "nvme_adminq_poll_period_us": 10000, 00:32:27.983 "nvme_ioq_poll_period_us": 0, 00:32:27.983 "io_queue_requests": 512, 00:32:27.983 "delay_cmd_submit": true, 00:32:27.983 "transport_retry_count": 4, 00:32:27.983 "bdev_retry_count": 3, 00:32:27.983 "transport_ack_timeout": 0, 00:32:27.983 "ctrlr_loss_timeout_sec": 0, 00:32:27.983 "reconnect_delay_sec": 0, 00:32:27.983 "fast_io_fail_timeout_sec": 0, 00:32:27.983 "disable_auto_failback": false, 00:32:27.983 "generate_uuids": false, 00:32:27.983 "transport_tos": 0, 00:32:27.983 "nvme_error_stat": false, 00:32:27.983 "rdma_srq_size": 0, 00:32:27.983 "io_path_stat": false, 00:32:27.983 "allow_accel_sequence": false, 00:32:27.983 "rdma_max_cq_size": 0, 00:32:27.983 "rdma_cm_event_timeout_ms": 0, 00:32:27.983 "dhchap_digests": [ 00:32:27.983 "sha256", 00:32:27.983 "sha384", 00:32:27.983 "sha512" 00:32:27.983 ], 00:32:27.983 "dhchap_dhgroups": [ 00:32:27.983 "null", 00:32:27.983 "ffdhe2048", 00:32:27.983 "ffdhe3072", 00:32:27.983 "ffdhe4096", 00:32:27.983 "ffdhe6144", 00:32:27.983 "ffdhe8192" 00:32:27.983 ] 00:32:27.983 } 00:32:27.983 }, 00:32:27.983 { 00:32:27.983 "method": "bdev_nvme_attach_controller", 00:32:27.983 "params": { 00:32:27.983 "name": "nvme0", 00:32:27.983 "trtype": "TCP", 00:32:27.983 "adrfam": "IPv4", 00:32:27.983 "traddr": "127.0.0.1", 00:32:27.983 "trsvcid": "4420", 00:32:27.983 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:27.983 "prchk_reftag": false, 00:32:27.983 "prchk_guard": false, 00:32:27.983 "ctrlr_loss_timeout_sec": 0, 00:32:27.983 "reconnect_delay_sec": 0, 00:32:27.983 "fast_io_fail_timeout_sec": 0, 00:32:27.983 "psk": "key0", 00:32:27.983 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:27.983 "hdgst": false, 00:32:27.983 "ddgst": false 00:32:27.983 } 00:32:27.983 }, 00:32:27.983 { 00:32:27.983 "method": "bdev_nvme_set_hotplug", 00:32:27.983 "params": { 00:32:27.983 "period_us": 100000, 00:32:27.983 "enable": false 00:32:27.983 } 00:32:27.983 }, 00:32:27.983 { 00:32:27.983 "method": "bdev_wait_for_examine" 00:32:27.983 } 00:32:27.983 ] 00:32:27.983 }, 00:32:27.983 { 00:32:27.983 "subsystem": "nbd", 00:32:27.983 "config": [] 00:32:27.983 } 00:32:27.983 ] 00:32:27.983 }' 00:32:27.983 21:48:17 keyring_file -- keyring/file.sh@114 -- # killprocess 2413789 00:32:27.983 21:48:17 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2413789 ']' 00:32:27.983 21:48:17 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 2413789 00:32:27.983 21:48:17 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:27.983 21:48:17 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:27.983 21:48:17 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2413789 00:32:27.983 21:48:17 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:27.983 21:48:17 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:27.983 21:48:17 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2413789' 00:32:27.983 killing process with pid 2413789 00:32:27.983 21:48:17 keyring_file -- common/autotest_common.sh@967 -- # kill 2413789 00:32:27.983 Received shutdown signal, test time was about 1.000000 seconds 00:32:27.983 00:32:27.983 Latency(us) 00:32:27.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.983 =================================================================================================================== 00:32:27.983 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:27.983 21:48:17 keyring_file -- common/autotest_common.sh@972 -- # wait 2413789 00:32:28.243 21:48:17 keyring_file -- keyring/file.sh@117 -- # bperfpid=2415710 00:32:28.243 21:48:17 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2415710 /var/tmp/bperf.sock 00:32:28.243 21:48:17 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2415710 ']' 00:32:28.243 21:48:17 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:28.243 21:48:17 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:28.243 21:48:17 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:28.243 21:48:17 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:28.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
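
The trace above walks SPDK's file-based keyring path end to end over the bdevperf RPC socket: a PSK file with 0660 permissions is rejected, the same file with 0600 is accepted under the name key0, and an NVMe/TCP controller is then attached with --psk key0. A condensed sketch of that flow follows; it is not the test script itself, the helper name bperf_rpc is illustrative, and the key material and paths are taken from the log for readability only.

  bperf_rpc() { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }   # same RPC socket the trace talks to
  keyfile=$(mktemp)
  echo -n "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$keyfile"
  chmod 0600 "$keyfile"                                         # 0660 fails with "Operation not permitted", as seen above
  bperf_rpc keyring_file_add_key key0 "$keyfile"                # register the PSK file under the name key0
  bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  bperf_rpc keyring_get_keys | jq -r '.[] | select(.name == "key0").refcnt'   # the get_refcnt checks above boil down to this
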
00:32:28.243 21:48:17 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:28.243 21:48:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:28.243 21:48:17 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:28.243 "subsystems": [ 00:32:28.243 { 00:32:28.243 "subsystem": "keyring", 00:32:28.243 "config": [ 00:32:28.243 { 00:32:28.243 "method": "keyring_file_add_key", 00:32:28.243 "params": { 00:32:28.243 "name": "key0", 00:32:28.243 "path": "/tmp/tmp.0FOZjE9aaV" 00:32:28.243 } 00:32:28.243 }, 00:32:28.243 { 00:32:28.243 "method": "keyring_file_add_key", 00:32:28.243 "params": { 00:32:28.243 "name": "key1", 00:32:28.243 "path": "/tmp/tmp.LLyuZ5TrNY" 00:32:28.243 } 00:32:28.243 } 00:32:28.243 ] 00:32:28.243 }, 00:32:28.243 { 00:32:28.243 "subsystem": "iobuf", 00:32:28.243 "config": [ 00:32:28.243 { 00:32:28.243 "method": "iobuf_set_options", 00:32:28.243 "params": { 00:32:28.243 "small_pool_count": 8192, 00:32:28.243 "large_pool_count": 1024, 00:32:28.243 "small_bufsize": 8192, 00:32:28.243 "large_bufsize": 135168 00:32:28.243 } 00:32:28.243 } 00:32:28.243 ] 00:32:28.243 }, 00:32:28.243 { 00:32:28.243 "subsystem": "sock", 00:32:28.243 "config": [ 00:32:28.243 { 00:32:28.243 "method": "sock_set_default_impl", 00:32:28.243 "params": { 00:32:28.243 "impl_name": "posix" 00:32:28.243 } 00:32:28.243 }, 00:32:28.243 { 00:32:28.243 "method": "sock_impl_set_options", 00:32:28.243 "params": { 00:32:28.243 "impl_name": "ssl", 00:32:28.243 "recv_buf_size": 4096, 00:32:28.243 "send_buf_size": 4096, 00:32:28.243 "enable_recv_pipe": true, 00:32:28.243 "enable_quickack": false, 00:32:28.243 "enable_placement_id": 0, 00:32:28.244 "enable_zerocopy_send_server": true, 00:32:28.244 "enable_zerocopy_send_client": false, 00:32:28.244 "zerocopy_threshold": 0, 00:32:28.244 "tls_version": 0, 00:32:28.244 "enable_ktls": false 00:32:28.244 } 00:32:28.244 }, 00:32:28.244 { 00:32:28.244 "method": "sock_impl_set_options", 00:32:28.244 "params": { 00:32:28.244 "impl_name": "posix", 00:32:28.244 "recv_buf_size": 2097152, 00:32:28.244 "send_buf_size": 2097152, 00:32:28.244 "enable_recv_pipe": true, 00:32:28.244 "enable_quickack": false, 00:32:28.244 "enable_placement_id": 0, 00:32:28.244 "enable_zerocopy_send_server": true, 00:32:28.244 "enable_zerocopy_send_client": false, 00:32:28.244 "zerocopy_threshold": 0, 00:32:28.244 "tls_version": 0, 00:32:28.244 "enable_ktls": false 00:32:28.244 } 00:32:28.244 } 00:32:28.244 ] 00:32:28.244 }, 00:32:28.244 { 00:32:28.244 "subsystem": "vmd", 00:32:28.244 "config": [] 00:32:28.244 }, 00:32:28.244 { 00:32:28.244 "subsystem": "accel", 00:32:28.244 "config": [ 00:32:28.244 { 00:32:28.244 "method": "accel_set_options", 00:32:28.244 "params": { 00:32:28.244 "small_cache_size": 128, 00:32:28.244 "large_cache_size": 16, 00:32:28.244 "task_count": 2048, 00:32:28.244 "sequence_count": 2048, 00:32:28.244 "buf_count": 2048 00:32:28.244 } 00:32:28.244 } 00:32:28.244 ] 00:32:28.244 }, 00:32:28.244 { 00:32:28.244 "subsystem": "bdev", 00:32:28.244 "config": [ 00:32:28.244 { 00:32:28.244 "method": "bdev_set_options", 00:32:28.244 "params": { 00:32:28.244 "bdev_io_pool_size": 65535, 00:32:28.244 "bdev_io_cache_size": 256, 00:32:28.244 "bdev_auto_examine": true, 00:32:28.244 "iobuf_small_cache_size": 128, 00:32:28.244 "iobuf_large_cache_size": 16 00:32:28.244 } 00:32:28.244 }, 00:32:28.244 { 00:32:28.244 "method": "bdev_raid_set_options", 00:32:28.244 "params": { 00:32:28.244 "process_window_size_kb": 1024 00:32:28.244 } 00:32:28.244 }, 00:32:28.244 { 00:32:28.244 
"method": "bdev_iscsi_set_options", 00:32:28.244 "params": { 00:32:28.244 "timeout_sec": 30 00:32:28.244 } 00:32:28.244 }, 00:32:28.244 { 00:32:28.244 "method": "bdev_nvme_set_options", 00:32:28.244 "params": { 00:32:28.244 "action_on_timeout": "none", 00:32:28.244 "timeout_us": 0, 00:32:28.244 "timeout_admin_us": 0, 00:32:28.244 "keep_alive_timeout_ms": 10000, 00:32:28.244 "arbitration_burst": 0, 00:32:28.244 "low_priority_weight": 0, 00:32:28.244 "medium_priority_weight": 0, 00:32:28.244 "high_priority_weight": 0, 00:32:28.244 "nvme_adminq_poll_period_us": 10000, 00:32:28.244 "nvme_ioq_poll_period_us": 0, 00:32:28.244 "io_queue_requests": 512, 00:32:28.244 "delay_cmd_submit": true, 00:32:28.244 "transport_retry_count": 4, 00:32:28.244 "bdev_retry_count": 3, 00:32:28.244 "transport_ack_timeout": 0, 00:32:28.244 "ctrlr_loss_timeout_sec": 0, 00:32:28.244 "reconnect_delay_sec": 0, 00:32:28.244 "fast_io_fail_timeout_sec": 0, 00:32:28.244 "disable_auto_failback": false, 00:32:28.244 "generate_uuids": false, 00:32:28.244 "transport_tos": 0, 00:32:28.244 "nvme_error_stat": false, 00:32:28.244 "rdma_srq_size": 0, 00:32:28.244 "io_path_stat": false, 00:32:28.244 "allow_accel_sequence": false, 00:32:28.244 "rdma_max_cq_size": 0, 00:32:28.244 "rdma_cm_event_timeout_ms": 0, 00:32:28.244 "dhchap_digests": [ 00:32:28.244 "sha256", 00:32:28.244 "sha384", 00:32:28.244 "sha512" 00:32:28.244 ], 00:32:28.244 "dhchap_dhgroups": [ 00:32:28.244 "null", 00:32:28.244 "ffdhe2048", 00:32:28.244 "ffdhe3072", 00:32:28.244 "ffdhe4096", 00:32:28.244 "ffdhe6144", 00:32:28.244 "ffdhe8192" 00:32:28.244 ] 00:32:28.244 } 00:32:28.244 }, 00:32:28.244 { 00:32:28.244 "method": "bdev_nvme_attach_controller", 00:32:28.244 "params": { 00:32:28.244 "name": "nvme0", 00:32:28.244 "trtype": "TCP", 00:32:28.244 "adrfam": "IPv4", 00:32:28.244 "traddr": "127.0.0.1", 00:32:28.244 "trsvcid": "4420", 00:32:28.244 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:28.244 "prchk_reftag": false, 00:32:28.244 "prchk_guard": false, 00:32:28.244 "ctrlr_loss_timeout_sec": 0, 00:32:28.244 "reconnect_delay_sec": 0, 00:32:28.244 "fast_io_fail_timeout_sec": 0, 00:32:28.244 "psk": "key0", 00:32:28.244 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:28.244 "hdgst": false, 00:32:28.244 "ddgst": false 00:32:28.244 } 00:32:28.244 }, 00:32:28.244 { 00:32:28.244 "method": "bdev_nvme_set_hotplug", 00:32:28.244 "params": { 00:32:28.244 "period_us": 100000, 00:32:28.244 "enable": false 00:32:28.244 } 00:32:28.244 }, 00:32:28.244 { 00:32:28.244 "method": "bdev_wait_for_examine" 00:32:28.244 } 00:32:28.244 ] 00:32:28.244 }, 00:32:28.244 { 00:32:28.244 "subsystem": "nbd", 00:32:28.244 "config": [] 00:32:28.244 } 00:32:28.244 ] 00:32:28.244 }' 00:32:28.244 [2024-07-15 21:48:17.906152] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
00:32:28.244 [2024-07-15 21:48:17.906210] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2415710 ] 00:32:28.244 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.244 [2024-07-15 21:48:17.981154] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.244 [2024-07-15 21:48:18.034192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.503 [2024-07-15 21:48:18.175808] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:29.073 21:48:18 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:29.073 21:48:18 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:29.073 21:48:18 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:29.073 21:48:18 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:29.073 21:48:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.073 21:48:18 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:29.073 21:48:18 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:29.073 21:48:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:29.073 21:48:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:29.073 21:48:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:29.073 21:48:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:29.073 21:48:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.334 21:48:18 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:29.334 21:48:18 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:29.334 21:48:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:29.334 21:48:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:29.334 21:48:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:29.334 21:48:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.334 21:48:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:29.594 21:48:19 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:29.594 21:48:19 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:29.594 21:48:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:29.594 21:48:19 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:29.594 21:48:19 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:29.594 21:48:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:29.594 21:48:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.0FOZjE9aaV /tmp/tmp.LLyuZ5TrNY 00:32:29.594 21:48:19 keyring_file -- keyring/file.sh@20 -- # killprocess 2415710 00:32:29.594 21:48:19 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2415710 ']' 00:32:29.594 21:48:19 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2415710 00:32:29.594 21:48:19 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:32:29.594 21:48:19 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:29.594 21:48:19 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2415710 00:32:29.594 21:48:19 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:29.594 21:48:19 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:29.594 21:48:19 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2415710' 00:32:29.594 killing process with pid 2415710 00:32:29.594 21:48:19 keyring_file -- common/autotest_common.sh@967 -- # kill 2415710 00:32:29.594 Received shutdown signal, test time was about 1.000000 seconds 00:32:29.594 00:32:29.594 Latency(us) 00:32:29.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.594 =================================================================================================================== 00:32:29.594 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:29.594 21:48:19 keyring_file -- common/autotest_common.sh@972 -- # wait 2415710 00:32:29.854 21:48:19 keyring_file -- keyring/file.sh@21 -- # killprocess 2413585 00:32:29.854 21:48:19 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2413585 ']' 00:32:29.854 21:48:19 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2413585 00:32:29.854 21:48:19 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:29.854 21:48:19 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:29.854 21:48:19 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2413585 00:32:29.854 21:48:19 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:29.854 21:48:19 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:29.854 21:48:19 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2413585' 00:32:29.854 killing process with pid 2413585 00:32:29.854 21:48:19 keyring_file -- common/autotest_common.sh@967 -- # kill 2413585 00:32:29.854 [2024-07-15 21:48:19.524063] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:29.854 21:48:19 keyring_file -- common/autotest_common.sh@972 -- # wait 2413585 00:32:30.115 00:32:30.115 real 0m11.040s 00:32:30.115 user 0m25.702s 00:32:30.115 sys 0m2.569s 00:32:30.115 21:48:19 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:30.115 21:48:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:30.115 ************************************ 00:32:30.115 END TEST keyring_file 00:32:30.115 ************************************ 00:32:30.115 21:48:19 -- common/autotest_common.sh@1142 -- # return 0 00:32:30.115 21:48:19 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:30.115 21:48:19 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:30.115 21:48:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:30.115 21:48:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:30.115 21:48:19 -- common/autotest_common.sh@10 -- # set +x 00:32:30.115 ************************************ 00:32:30.115 START TEST keyring_linux 00:32:30.115 ************************************ 00:32:30.115 21:48:19 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:30.115 * Looking for test storage... 00:32:30.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:30.115 21:48:19 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:30.115 21:48:19 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:30.376 21:48:19 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.376 21:48:19 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.376 21:48:19 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.376 21:48:19 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.376 21:48:19 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.376 21:48:19 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.376 21:48:19 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:30.376 21:48:19 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:30.376 21:48:19 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:30.376 21:48:19 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:30.376 21:48:19 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:30.376 21:48:19 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:30.376 21:48:19 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:30.376 21:48:19 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:30.376 21:48:19 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:30.376 21:48:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:30.376 21:48:19 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:30.376 21:48:19 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:30.376 21:48:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:30.376 21:48:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:30.376 21:48:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:30.376 21:48:19 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:30.376 21:48:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:30.376 /tmp/:spdk-test:key0 00:32:30.376 21:48:19 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:30.376 21:48:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:30.376 21:48:19 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:30.376 21:48:19 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:30.376 21:48:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:30.376 21:48:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:30.376 21:48:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:30.376 21:48:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:30.376 21:48:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:30.376 21:48:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:30.376 /tmp/:spdk-test:key1 00:32:30.376 21:48:20 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:30.376 21:48:20 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2416167 00:32:30.376 21:48:20 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2416167 00:32:30.376 21:48:20 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2416167 ']' 00:32:30.376 21:48:20 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.376 21:48:20 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:30.376 21:48:20 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:30.376 21:48:20 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:30.376 21:48:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:30.376 [2024-07-15 21:48:20.079267] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
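
The /tmp/:spdk-test:key0 and :spdk-test:key1 files created above come out of format_interchange_psk, which only surfaces in the trace as a bare "python -" invocation. What it appears to compute is the NVMe/TCP PSK interchange form: base64 over the configured key bytes followed by their little-endian CRC-32, wrapped as NVMeTLSkey-1:<digest>:<base64>:. The one-liner below is a hedged equivalent, reconstructed from that format rather than copied from the helper; the 00 field mirrors digest=0 from the trace.

  key=00112233445566778899aabbccddeeff
  psk=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:00:" + base64.b64encode(k + crc).decode() + ":")' "$key")
  echo "$psk" > /tmp/:spdk-test:key0                            # same path the trace shows
  chmod 0600 /tmp/:spdk-test:key0                               # key files must not be group/world accessible
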
00:32:30.376 [2024-07-15 21:48:20.079324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2416167 ] 00:32:30.376 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.376 [2024-07-15 21:48:20.135938] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.636 [2024-07-15 21:48:20.200905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.207 21:48:20 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:31.207 21:48:20 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:31.207 21:48:20 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:31.207 21:48:20 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.207 21:48:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:31.207 [2024-07-15 21:48:20.867888] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.207 null0 00:32:31.207 [2024-07-15 21:48:20.899930] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:31.207 [2024-07-15 21:48:20.900324] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:31.207 21:48:20 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.207 21:48:20 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:31.207 469519454 00:32:31.207 21:48:20 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:31.207 758155201 00:32:31.207 21:48:20 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2416252 00:32:31.207 21:48:20 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2416252 /var/tmp/bperf.sock 00:32:31.208 21:48:20 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:31.208 21:48:20 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2416252 ']' 00:32:31.208 21:48:20 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:31.208 21:48:20 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:31.208 21:48:20 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:31.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:31.208 21:48:20 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:31.208 21:48:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:31.208 [2024-07-15 21:48:20.977116] Starting SPDK v24.09-pre git sha1 b26ca8289 / DPDK 24.03.0 initialization... 
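
With spdk_tgt listening on 127.0.0.1:4420 and both PSKs loaded into the session keyring by keyctl, the rest of the trace drives everything over the bperf RPC socket. A compact sketch of that sequence follows; the keyctl and rpc.py invocations are the ones in the trace, the bperf_rpc wrapper name is illustrative, and the serial number is whatever keyctl printed for a given run.

  psk="NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"   # key0 payload as added above
  keyctl add user :spdk-test:key0 "$psk" @s            # prints the key serial (469519454 in this run)
  bperf_rpc() { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  bperf_rpc keyring_linux_set_options --enable         # let SPDK resolve ":spdk-test:*" names via the kernel keyring
  bperf_rpc framework_start_init                       # bdevperf was started with --wait-for-rpc
  bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  sn=$(keyctl search @s user :spdk-test:key0)          # name -> serial, as in get_keysn
  keyctl print "$sn"                                   # payload round-trips to the NVMeTLSkey-1 string
  keyctl unlink "$sn"                                  # cleanup; the trace reports "1 links removed"
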
00:32:31.208 [2024-07-15 21:48:20.977168] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2416252 ] 00:32:31.208 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.468 [2024-07-15 21:48:21.051827] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.468 [2024-07-15 21:48:21.105990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.038 21:48:21 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:32.038 21:48:21 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:32.038 21:48:21 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:32.038 21:48:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:32.298 21:48:21 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:32.298 21:48:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:32.298 21:48:22 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:32.298 21:48:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:32.558 [2024-07-15 21:48:22.204421] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:32.558 nvme0n1 00:32:32.558 21:48:22 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:32.558 21:48:22 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:32.558 21:48:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:32.558 21:48:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:32.558 21:48:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:32.558 21:48:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.818 21:48:22 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:32.818 21:48:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:32.818 21:48:22 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:32.818 21:48:22 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:32.818 21:48:22 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:32.818 21:48:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.818 21:48:22 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:32.818 21:48:22 keyring_linux -- keyring/linux.sh@25 -- # sn=469519454 00:32:32.818 21:48:22 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:32.818 21:48:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:32:33.079 21:48:22 keyring_linux -- keyring/linux.sh@26 -- # [[ 469519454 == \4\6\9\5\1\9\4\5\4 ]] 00:32:33.079 21:48:22 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 469519454 00:32:33.079 21:48:22 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:33.079 21:48:22 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:33.079 Running I/O for 1 seconds... 00:32:34.020 00:32:34.020 Latency(us) 00:32:34.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:34.020 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:34.020 nvme0n1 : 1.01 9109.90 35.59 0.00 0.00 13943.02 8574.29 23156.05 00:32:34.020 =================================================================================================================== 00:32:34.020 Total : 9109.90 35.59 0.00 0.00 13943.02 8574.29 23156.05 00:32:34.020 0 00:32:34.020 21:48:23 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:34.020 21:48:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:34.296 21:48:23 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:34.296 21:48:23 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:34.296 21:48:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:34.296 21:48:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:34.296 21:48:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:34.296 21:48:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.296 21:48:24 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:34.296 21:48:24 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:34.296 21:48:24 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:34.296 21:48:24 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:34.296 21:48:24 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:32:34.296 21:48:24 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:34.296 21:48:24 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:34.296 21:48:24 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:34.296 21:48:24 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:34.296 21:48:24 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:34.296 21:48:24 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:34.296 21:48:24 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:34.557 [2024-07-15 21:48:24.221281] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:34.557 [2024-07-15 21:48:24.221857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a2b0 (107): Transport endpoint is not connected 00:32:34.557 [2024-07-15 21:48:24.222854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a2b0 (9): Bad file descriptor 00:32:34.557 [2024-07-15 21:48:24.223855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:34.557 [2024-07-15 21:48:24.223861] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:34.557 [2024-07-15 21:48:24.223866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:34.557 request: 00:32:34.557 { 00:32:34.557 "name": "nvme0", 00:32:34.557 "trtype": "tcp", 00:32:34.557 "traddr": "127.0.0.1", 00:32:34.557 "adrfam": "ipv4", 00:32:34.557 "trsvcid": "4420", 00:32:34.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:34.557 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:34.557 "prchk_reftag": false, 00:32:34.557 "prchk_guard": false, 00:32:34.557 "hdgst": false, 00:32:34.557 "ddgst": false, 00:32:34.557 "psk": ":spdk-test:key1", 00:32:34.557 "method": "bdev_nvme_attach_controller", 00:32:34.557 "req_id": 1 00:32:34.557 } 00:32:34.557 Got JSON-RPC error response 00:32:34.557 response: 00:32:34.557 { 00:32:34.557 "code": -5, 00:32:34.557 "message": "Input/output error" 00:32:34.557 } 00:32:34.557 21:48:24 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:32:34.557 21:48:24 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:34.557 21:48:24 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:34.557 21:48:24 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:34.557 21:48:24 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:34.557 21:48:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:34.557 21:48:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:34.557 21:48:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:34.557 21:48:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:34.557 21:48:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:34.557 21:48:24 keyring_linux -- keyring/linux.sh@33 -- # sn=469519454 00:32:34.557 21:48:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 469519454 00:32:34.557 1 links removed 00:32:34.557 21:48:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:34.557 21:48:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:34.557 21:48:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:34.557 21:48:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:34.557 21:48:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:34.557 21:48:24 keyring_linux -- keyring/linux.sh@33 -- # sn=758155201 00:32:34.557 
21:48:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 758155201 00:32:34.557 1 links removed 00:32:34.557 21:48:24 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2416252 00:32:34.557 21:48:24 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2416252 ']' 00:32:34.557 21:48:24 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2416252 00:32:34.557 21:48:24 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:34.557 21:48:24 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:34.557 21:48:24 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2416252 00:32:34.557 21:48:24 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:34.557 21:48:24 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:34.557 21:48:24 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2416252' 00:32:34.557 killing process with pid 2416252 00:32:34.557 21:48:24 keyring_linux -- common/autotest_common.sh@967 -- # kill 2416252 00:32:34.557 Received shutdown signal, test time was about 1.000000 seconds 00:32:34.557 00:32:34.557 Latency(us) 00:32:34.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:34.557 =================================================================================================================== 00:32:34.557 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:34.557 21:48:24 keyring_linux -- common/autotest_common.sh@972 -- # wait 2416252 00:32:34.818 21:48:24 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2416167 00:32:34.818 21:48:24 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2416167 ']' 00:32:34.818 21:48:24 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2416167 00:32:34.818 21:48:24 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:34.818 21:48:24 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:34.818 21:48:24 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2416167 00:32:34.818 21:48:24 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:34.818 21:48:24 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:34.818 21:48:24 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2416167' 00:32:34.818 killing process with pid 2416167 00:32:34.818 21:48:24 keyring_linux -- common/autotest_common.sh@967 -- # kill 2416167 00:32:34.818 21:48:24 keyring_linux -- common/autotest_common.sh@972 -- # wait 2416167 00:32:35.081 00:32:35.081 real 0m4.869s 00:32:35.081 user 0m8.386s 00:32:35.081 sys 0m1.224s 00:32:35.081 21:48:24 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:35.081 21:48:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:35.081 ************************************ 00:32:35.081 END TEST keyring_linux 00:32:35.081 ************************************ 00:32:35.081 21:48:24 -- common/autotest_common.sh@1142 -- # return 0 00:32:35.081 21:48:24 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:35.081 21:48:24 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:35.081 21:48:24 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:35.081 21:48:24 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:35.081 21:48:24 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:35.081 21:48:24 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:35.081 21:48:24 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:35.081 21:48:24 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:35.081 21:48:24 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:35.081 21:48:24 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:35.081 21:48:24 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:35.081 21:48:24 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:35.081 21:48:24 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:35.081 21:48:24 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:35.081 21:48:24 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:35.081 21:48:24 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:35.081 21:48:24 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:35.081 21:48:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:35.081 21:48:24 -- common/autotest_common.sh@10 -- # set +x 00:32:35.081 21:48:24 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:35.081 21:48:24 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:35.081 21:48:24 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:35.081 21:48:24 -- common/autotest_common.sh@10 -- # set +x 00:32:43.226 INFO: APP EXITING 00:32:43.226 INFO: killing all VMs 00:32:43.226 INFO: killing vhost app 00:32:43.226 WARN: no vhost pid file found 00:32:43.226 INFO: EXIT DONE 00:32:46.597 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:32:46.597 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:32:46.597 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:32:46.597 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:32:46.597 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:32:46.598 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:32:46.598 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:32:46.598 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:32:46.598 0000:65:00.0 (144d a80a): Already using the nvme driver 00:32:46.598 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:32:46.598 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:32:46.598 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:32:46.598 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:32:46.598 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:32:46.598 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:32:46.598 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:32:46.598 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:32:49.903 Cleaning 00:32:49.903 Removing: /var/run/dpdk/spdk0/config 00:32:49.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:49.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:49.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:49.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:49.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:49.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:49.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:49.903 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:49.903 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:49.903 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:49.903 Removing: /var/run/dpdk/spdk1/config 00:32:49.903 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:49.903 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:49.903 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:49.903 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:49.903 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:49.903 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:49.903 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:49.903 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:49.903 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:49.903 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:49.903 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:49.903 Removing: /var/run/dpdk/spdk2/config 00:32:49.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:49.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:49.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:49.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:49.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:49.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:49.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:49.903 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:49.903 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:49.903 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:49.903 Removing: /var/run/dpdk/spdk3/config 00:32:49.903 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:49.903 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:49.903 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:49.903 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:49.903 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:49.904 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:49.904 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:49.904 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:49.904 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:49.904 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:49.904 Removing: /var/run/dpdk/spdk4/config 00:32:49.904 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:49.904 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:49.904 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:49.904 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:49.904 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:49.904 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:49.904 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:49.904 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:49.904 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:49.904 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:49.904 Removing: /dev/shm/bdev_svc_trace.1 00:32:49.904 Removing: /dev/shm/nvmf_trace.0 00:32:49.904 Removing: /dev/shm/spdk_tgt_trace.pid1957891 00:32:49.904 Removing: /var/run/dpdk/spdk0 00:32:49.904 Removing: /var/run/dpdk/spdk1 00:32:49.904 Removing: /var/run/dpdk/spdk2 00:32:49.904 Removing: /var/run/dpdk/spdk3 00:32:49.904 Removing: /var/run/dpdk/spdk4 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1956163 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1957891 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1958420 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1960042 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1960263 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1961455 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1961659 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1961939 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1962907 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1963604 00:32:49.904 Removing: 
/var/run/dpdk/spdk_pid1963871 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1964153 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1964540 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1964927 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1965286 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1965510 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1965752 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1967088 00:32:49.904 Removing: /var/run/dpdk/spdk_pid1970343 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1970710 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1971079 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1971297 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1971782 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1971816 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1972390 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1972501 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1972868 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1972979 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1973239 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1973426 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1974002 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1974169 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1974445 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1974807 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1974841 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1975114 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1975287 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1975607 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1975955 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1976304 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1976565 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1976744 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1977045 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1977398 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1977747 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1978021 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1978207 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1978486 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1978835 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1979188 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1979485 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1979662 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1979930 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1980282 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1980631 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1980987 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1981053 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1981380 00:32:50.165 Removing: /var/run/dpdk/spdk_pid1985712 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2039387 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2044417 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2056300 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2063291 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2068151 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2068987 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2076161 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2083177 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2083227 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2084269 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2085313 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2086422 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2087059 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2087185 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2087411 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2087534 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2087536 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2088544 00:32:50.165 Removing: 
/var/run/dpdk/spdk_pid2089554 00:32:50.165 Removing: /var/run/dpdk/spdk_pid2090579 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2091228 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2091338 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2091589 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2092997 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2094387 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2104423 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2104777 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2109917 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2117237 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2120255 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2132531 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2143097 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2145193 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2146356 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2167049 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2171544 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2203297 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2208521 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2210940 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2213281 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2213420 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2213642 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2213980 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2214598 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2216711 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2217793 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2218494 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2220945 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2221752 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2222615 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2227346 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2239433 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2244401 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2251609 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2253164 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2255046 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2260502 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2265343 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2274359 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2274400 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2279267 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2279454 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2279790 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2280170 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2280289 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2285741 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2286330 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2291507 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2294839 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2301210 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2307733 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2318183 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2326634 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2326679 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2348997 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2349680 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2350404 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2351217 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2352142 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2352942 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2353710 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2354496 00:32:50.427 Removing: /var/run/dpdk/spdk_pid2359536 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2359822 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2367467 00:32:50.689 Removing: 
/var/run/dpdk/spdk_pid2367722 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2370353 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2377669 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2377790 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2383645 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2385844 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2388356 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2389549 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2392070 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2393334 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2403198 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2403854 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2404435 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2407227 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2407830 00:32:50.689 Removing: /var/run/dpdk/spdk_pid2408498 00:32:50.690 Removing: /var/run/dpdk/spdk_pid2413585 00:32:50.690 Removing: /var/run/dpdk/spdk_pid2413789 00:32:50.690 Removing: /var/run/dpdk/spdk_pid2415710 00:32:50.690 Removing: /var/run/dpdk/spdk_pid2416167 00:32:50.690 Removing: /var/run/dpdk/spdk_pid2416252 00:32:50.690 Clean 00:32:50.690 21:48:40 -- common/autotest_common.sh@1451 -- # return 0 00:32:50.690 21:48:40 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:50.690 21:48:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:50.690 21:48:40 -- common/autotest_common.sh@10 -- # set +x 00:32:50.690 21:48:40 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:50.690 21:48:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:50.690 21:48:40 -- common/autotest_common.sh@10 -- # set +x 00:32:50.952 21:48:40 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:50.952 21:48:40 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:50.952 21:48:40 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:50.952 21:48:40 -- spdk/autotest.sh@391 -- # hash lcov 00:32:50.952 21:48:40 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:50.952 21:48:40 -- spdk/autotest.sh@393 -- # hostname 00:32:50.952 21:48:40 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:50.952 geninfo: WARNING: invalid characters removed from testname! 
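The coverage commands surrounding this point reduce to the standard lcov capture/merge/filter flow; the following is a minimal sketch of that flow using the paths and test name visible in this log. The --rc branch/function switches are dropped for brevity, and the final genhtml step is an assumption added for illustration only, as it does not appear in this run.

# Capture per-test coverage from the instrumented build tree (same -d/-t/-o as this log)
lcov -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
     -t spdk-cyp-09 -o cov_test.info

# Merge the pre-test baseline with the test capture into one tracefile
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info

# Strip bundled dependencies, system headers and example apps, as the steps below do
lcov -q -r cov_total.info '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
     '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o cov_total.info

# Assumed final step (not part of this run): render an HTML report from the filtered data
genhtml cov_total.info -o coverage_html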
00:33:17.528 21:49:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:17.787 21:49:07 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:20.327 21:49:09 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:21.709 21:49:11 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:23.628 21:49:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:25.012 21:49:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:26.933 21:49:16 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:26.933 21:49:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:26.933 21:49:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:26.933 21:49:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.933 21:49:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.933 21:49:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.933 21:49:16 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.933 21:49:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.933 21:49:16 -- paths/export.sh@5 -- $ export PATH 00:33:26.933 21:49:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.933 21:49:16 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:26.933 21:49:16 -- common/autobuild_common.sh@444 -- $ date +%s 00:33:26.933 21:49:16 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721072956.XXXXXX 00:33:26.933 21:49:16 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721072956.lbzm0G 00:33:26.933 21:49:16 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:33:26.933 21:49:16 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:33:26.933 21:49:16 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:26.933 21:49:16 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:26.934 21:49:16 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:26.934 21:49:16 -- common/autobuild_common.sh@460 -- $ get_config_params 00:33:26.934 21:49:16 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:33:26.934 21:49:16 -- common/autotest_common.sh@10 -- $ set +x 00:33:26.934 21:49:16 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:33:26.934 21:49:16 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:33:26.934 21:49:16 -- pm/common@17 -- $ local monitor 00:33:26.934 21:49:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:26.934 21:49:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:26.934 21:49:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:26.934 21:49:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:26.934 21:49:16 -- pm/common@21 -- $ date +%s 00:33:26.934 21:49:16 -- pm/common@21 -- $ date +%s 00:33:26.934 
21:49:16 -- pm/common@25 -- $ sleep 1 00:33:26.934 21:49:16 -- pm/common@21 -- $ date +%s 00:33:26.934 21:49:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721072956 00:33:26.934 21:49:16 -- pm/common@21 -- $ date +%s 00:33:26.934 21:49:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721072956 00:33:26.934 21:49:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721072956 00:33:26.934 21:49:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721072956 00:33:26.934 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721072956_collect-vmstat.pm.log 00:33:26.934 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721072956_collect-cpu-load.pm.log 00:33:26.934 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721072956_collect-cpu-temp.pm.log 00:33:26.934 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721072956_collect-bmc-pm.bmc.pm.log 00:33:27.583 21:49:17 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:33:27.583 21:49:17 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:33:27.583 21:49:17 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:27.583 21:49:17 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:27.583 21:49:17 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:27.583 21:49:17 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:27.583 21:49:17 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:27.583 21:49:17 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:27.583 21:49:17 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:27.583 21:49:17 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:27.583 21:49:17 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:27.583 21:49:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:27.583 21:49:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:27.583 21:49:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:27.583 21:49:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:27.583 21:49:17 -- pm/common@44 -- $ pid=2428627 00:33:27.583 21:49:17 -- pm/common@50 -- $ kill -TERM 2428627 00:33:27.583 21:49:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:27.583 21:49:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:27.583 21:49:17 -- pm/common@44 -- $ pid=2428628 00:33:27.583 21:49:17 -- pm/common@50 -- $ kill 
-TERM 2428628 00:33:27.583 21:49:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:27.583 21:49:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:27.583 21:49:17 -- pm/common@44 -- $ pid=2428630 00:33:27.583 21:49:17 -- pm/common@50 -- $ kill -TERM 2428630 00:33:27.583 21:49:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:27.583 21:49:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:27.583 21:49:17 -- pm/common@44 -- $ pid=2428653 00:33:27.583 21:49:17 -- pm/common@50 -- $ sudo -E kill -TERM 2428653 00:33:27.844 + [[ -n 1836380 ]] 00:33:27.844 + sudo kill 1836380 00:33:27.855 [Pipeline] } 00:33:27.875 [Pipeline] // stage 00:33:27.882 [Pipeline] } 00:33:27.901 [Pipeline] // timeout 00:33:27.907 [Pipeline] } 00:33:27.924 [Pipeline] // catchError 00:33:27.930 [Pipeline] } 00:33:27.950 [Pipeline] // wrap 00:33:27.956 [Pipeline] } 00:33:27.972 [Pipeline] // catchError 00:33:27.981 [Pipeline] stage 00:33:27.983 [Pipeline] { (Epilogue) 00:33:28.000 [Pipeline] catchError 00:33:28.002 [Pipeline] { 00:33:28.016 [Pipeline] echo 00:33:28.018 Cleanup processes 00:33:28.024 [Pipeline] sh 00:33:28.311 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:28.311 2428731 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:33:28.311 2429179 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:28.326 [Pipeline] sh 00:33:28.610 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:28.610 ++ grep -v 'sudo pgrep' 00:33:28.610 ++ awk '{print $1}' 00:33:28.610 + sudo kill -9 2428731 00:33:28.622 [Pipeline] sh 00:33:28.906 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:38.908 [Pipeline] sh 00:33:39.185 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:39.186 Artifacts sizes are good 00:33:39.196 [Pipeline] archiveArtifacts 00:33:39.201 Archiving artifacts 00:33:39.378 [Pipeline] sh 00:33:39.659 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:39.672 [Pipeline] cleanWs 00:33:39.681 [WS-CLEANUP] Deleting project workspace... 00:33:39.681 [WS-CLEANUP] Deferred wipeout is used... 00:33:39.687 [WS-CLEANUP] done 00:33:39.688 [Pipeline] } 00:33:39.703 [Pipeline] // catchError 00:33:39.713 [Pipeline] sh 00:33:39.999 + logger -p user.info -t JENKINS-CI 00:33:40.008 [Pipeline] } 00:33:40.022 [Pipeline] // stage 00:33:40.026 [Pipeline] } 00:33:40.042 [Pipeline] // node 00:33:40.046 [Pipeline] End of Pipeline 00:33:40.099 Finished: SUCCESS